This is gonna be fun...

Encultured AI is a for-profit company with a public benefit mission: to develop technologies promoting the long-term survival and flourishing of humanity and other sentient life.

About us


Our principles

At Encultured, we believe advanced AI technology could be used to make the world a safer, happier, and healthier place to live. However, we also realize that AI poses an existential risk to humanity if not developed with adequate safety precautions. Given this, our goal is to develop products and services that help humanity steer toward the benefits and away from the risks of advanced AI systems.

Our current main strategy involves building a platform usable for AI safety and alignment experiments, comprising a suite of environments, tasks, and tools for building more environments and tasks. The platform itself will be an interface to a number of consumer-facing products, so our researchers and collaborators will have back-end access to services with real-world users. Over the next decade or so, we expect an increasing number of researchers — both inside and outside our company — will transition to developing safety and alignment solutions for AI technology, and through our platform and products, we’re aiming to provide them with a rich and interesting testbed for increasingly challenging experiments and benchmarks.
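
As a very rough illustration (not our actual API), here's a minimal Rust sketch of what a platform interface for environments and tasks could look like. All names and signatures below (Environment, Task, Observation, Action, StepResult) are hypothetical placeholders for exposition only.

// Hypothetical sketch only: placeholder types for what a platform
// environment/task interface could look like. Not a finished design.

/// An observation handed to an agent each step (placeholder payload).
#[derive(Debug, Clone)]
pub struct Observation(pub Vec<f32>);

/// An action chosen by an agent (placeholder payload).
#[derive(Debug, Clone)]
pub struct Action(pub Vec<f32>);

/// The outcome of advancing an environment by one step.
pub struct StepResult {
    pub observation: Observation,
    pub reward: f32,
    pub done: bool,
}

/// A simulated world that agents and experiments can interact with.
pub trait Environment {
    /// Start a new episode and return the initial observation.
    fn reset(&mut self) -> Observation;
    /// Advance the world by one step given an agent's action.
    fn step(&mut self, action: &Action) -> StepResult;
}

/// A task layered on top of an environment, e.g. a benchmark scenario.
pub trait Task<E: Environment> {
    /// Configure the environment for this task's scenario.
    fn setup(&self, env: &mut E);
    /// Score a completed episode, e.g. for benchmark comparison.
    fn score(&self, episode_rewards: &[f32]) -> f32;
}

In practice a real interface would need richer observation and action types, multi-agent support, and hooks for human participants, but a trait-based design like this is one way "tools for building more environments and tasks" could compose.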

Like any start-up, we’re pretty optimistic about our potential to grow and make it big. Still, we don’t believe our company or products alone will make the difference between a positive future for humanity versus a negative one, and we’re not aiming to have that kind of power over the world. Rather, we’re aiming to take part in a global ecosystem of companies using AI to benefit humanity, by making our products, services, and scientific platform available to other institutions and researchers.

Also, fun! We think our approach to AI has the potential to be very fun, and we’re very much hoping to keep it that way for the whole team :)

Experiments

Experiments we’re interested in supporting

Alignment with humans and human-like cultures
For AI alignment solutions to work in the real world, they have to work with the full richness of humans and human culture (e.g., the transmissibility of culture through interaction, and language as a form of culture). We would like to develop benchmarks that capture aspects of this richness.
Multi-paradigm cooperation
We’re interested in testing whether a given agent or group is able to interact safely and productively with another agent or group employing different alignment paradigms, different coordination norms, or different value functions.
Assistants
AI “assistant” algorithms are intended to learn and align with the intentions of a particular person or group, and take safe actions assisting with those intentions.
Mediators
We’re interested in whether “mediator” agents, when introduced into an interaction between two or more groups, can bring greater peace and prosperity between those groups.

Eventually, we want every safety and alignment researcher in the world to be able to test their solution ideas on our platform. This will yield not only benchmarks for comparison, but also a playground for interaction, where at least some acute safety failures and harmful interaction dynamics can be discovered in silico before reaching the real world. It's probably irresponsible to expect that our platform alone will ever be enough to certify a new AI technology as definitely safe, but if a system appears unsafe on our platform, it probably shouldn't be deployed in reality (which would be the opposite of fun).
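
To make the "test your solution on our platform" idea a bit more concrete, here's a hypothetical evaluation loop in Rust, building on the illustrative traits sketched above. The Policy trait and the crude safety-failure check are assumptions for exposition, not how we actually detect unsafe behavior.

// Hypothetical sketch of a benchmark loop, reusing the illustrative
// Environment/Task traits above. For exposition only.

/// An agent under evaluation, e.g. a candidate alignment approach.
pub trait Policy {
    fn act(&mut self, obs: &Observation) -> Action;
}

/// Run one episode of `task` in `env` under `policy`, returning the task
/// score and whether any step was flagged as an acute safety failure.
pub fn evaluate<E: Environment>(
    env: &mut E,
    task: &impl Task<E>,
    policy: &mut impl Policy,
    max_steps: usize,
) -> (f32, bool) {
    task.setup(env);
    let mut obs = env.reset();
    let mut rewards = Vec::new();
    let mut safety_violation = false;
    for _ in 0..max_steps {
        let action = policy.act(&obs);
        let result = env.step(&action);
        rewards.push(result.reward);
        // Placeholder check: a strongly negative reward stands in for an
        // acute safety failure; a real harness would use richer signals
        // from the environment, not just reward.
        if result.reward < -100.0 {
            safety_violation = true;
        }
        obs = result.observation;
        if result.done {
            break;
        }
    }
    (task.score(&rewards), safety_violation)
}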

Team

Meet the founders

Dr. Andrew Critch
CEO & Co-founder

Andrew is deeply driven to contribute to AI safety efforts on a global scale. Immediately prior to Encultured, Andrew spent 5 years working as a full-time research scientist at UC Berkeley, within the Center for Human-Compatible AI (CHAI), where he retains a part-time appointment. In 2017, Andrew also co-founded the Berkeley Existential Risk Initiative, a non-profit dedicated to improving humanity’s long-term prospects for survival and flourishing, where he volunteered as Executive Director for three years, and now volunteers as President. Andrew also established the Survival and Flourishing Fund and Survival and Flourishing Projects with the support of philanthropist Jaan Tallinn, and co-developed the S-process for philanthropic grant-making with Oliver Habryka.

In 2013, Andrew earned his Ph.D. in mathematics at UC Berkeley, studying applications of algebraic geometry to machine learning models. During that time, he co-founded the Center for Applied Rationality (CFAR) and the Summer Program on Applied Rationality and Cognition (SPARC). He was offered university faculty and research positions in mathematics, mathematical biosciences, and philosophy, and worked as an algorithmic stock trader at Jane Street Capital's New York City office (2014-2015) and as a Research Fellow at the Machine Intelligence Research Institute (2015-2017).

Most recently, Andrew had the good sense to be super-impressed by Nick’s amazing engineering skills and realized they should found a company together :)

critch@encultured.ai

Dr. Nick Hay
CTO & Co-founder

Nick wants to ensure that powerful AI is developed for the benefit of humanity, and believes that to do this we need a good understanding not only of artificial intelligence but also of human intelligence, including deep questions about human minds and culture spanning anthropology, cognitive linguistics, and neuroscience. Prior to Encultured, Nick spent 5 years at Vicarious AI working on approaches to artificial general intelligence (AGI) grounded in real-world robotics. In 2015, Nick earned his Ph.D. at UC Berkeley under Professor Stuart Russell, applying reinforcement learning and Bayesian analysis to the metalevel control problem: how can an agent learn to control its own computations?

Nick first began thinking deeply about the impact of AI on humanity upon reading Eliezer Yudkowsky’s Creating Friendly AI in 2001, subsequently interning at MIRI in 2006 and attending the Singularity Summit in 2007. Originally hailing from New Zealand, Nick is still getting used to walking upside down.

nick@encultured.ai

Job Openings

We’re hiring — join our team!

Join our team and help define it! We are always looking for collaborators and visionaries. Our current openings:

Job: Game Developer - Rust Game Engineer

Rust Game Engineer

Location

Mostly remote, via Zoom calls and Discord. At least once per week we hold an in-person workday in the San Francisco / Berkeley area, so if you live nearby it'd be great to have you attend.

Compensation

Starting between $120k and $180k per year depending on experience, plus healthcare benefits and equity incentives vesting over 5 years. Raises become available with strong individual performance or with team-wide accomplishments that expand our revenue stream. We also offer a Safe-Harbor 401(k) with the IRS maximum employer matching.

Qualifications

For this role we need candidates with game development experience and an interest in all aspects of the game development process, ideally in Rust or with extensive experience in a comparable systems programming language (e.g., C++). We'd like to see candidates who complement their skill set with an area of focus that might include one or more of the following:
  • gameplay network engineering,
  • graphics & rendering,
  • physics.
In addition, we have two tiers of qualification for this role: Junior Game Engineer & Senior Game Engineer. To qualify directly for the Senior title upon joining, we require candidates who have either:
  • a Bachelor’s degree or above in computer science, physics, mathematics, or a closely related field, or
  • extensive industry experience that is clearly beyond that of a typical college graduate in CS.
Apply for this Game Developer position
Job: Game Developer - Gameplay Network Engineer

Gameplay Network Engineer

Location

Mostly remote, via Zoom calls and Discord. At least once per week we hold an in-person workday in the San Francisco / Berkeley area, so if you live nearby it'd be great to have you attend.

Compensation

Starting between $120k and $180k per year depending on experience, plus healthcare benefits and equity incentives vesting over 5 years. Raises become available with strong individual performance or with team-wide accomplishments that expand our revenue stream. We also offer a Safe-Harbor 401(k) with the IRS maximum employer matching.

Qualifications

In this role we need candidates to have experience with building gameplay network infrastructure for modern multiplayer online games. We also welcome experience with:
  • graphics & rendering,
  • physics,
  • Rust.
In addition, we have two tiers of qualification for this role: Junior Network Architect & Senior Network Architect. To qualify directly for the Senior title upon joining, we require candidates who have either:
  • a Bachelor’s degree or above in computer science, physics, mathematics, or a closely related field, or
  • extensive industry experience that is clearly beyond that of a typical college graduate in CS.
Apply for this Game Developer position
Job: Game Developer - Graphics Engineer

Graphics Engineer

Location

Mostly remote, via Zoom calls and Discord. At least once per week we hold an in-person workday in the San Francisco / Berkeley area, so if you live nearby it'd be great to have you attend.

Compensation

Starting between $120k and $180k per year depending on experience, plus healthcare benefits and equity incentives vesting over 5 years. Raises become available with strong individual performance or with team-wide accomplishments that expand our revenue stream. We also offer a Safe-Harbor 401(k) with the IRS maximum employer matching.

Qualifications

In this role we need candidates to have experience with modern real-time rendering techniques, such as ray tracing, radiosity, global illumination, optimal use of the GPU, and shader design. We’ll need you to apply these skills toward the development of a modern game engine, written in Rust, optimized for interoperability with modern AI/ML techniques. We also welcome experience with:
  • gameplay network engineering,
  • physics,
  • Rust.
In addition, we have two tiers of qualification for this role: Junior Graphics Engineer & Senior Graphics Engineer. To qualify directly for the Senior title upon joining, we require candidates who have either:
  • a Bachelor’s degree or above in computer science, physics, mathematics, or a closely related field, or
  • extensive industry experience that is clearly beyond that of a typical college graduate in CS.
Apply for this Game Developer position
Job: ML Engineer - LLM Specialist

Machine Learning Engineer - LLM Specialist

Location

Mostly remote, via Zoom calls and Discord. At least once per week we hold an in-person workday in the San Francisco / Berkeley area, so if you live nearby it'd be great to have you attend.

Compensation

Starting between $120k and $180k per year depending on experience, plus healthcare benefits and equity incentives vesting over 5 years. Raises become available with strong individual performance or with team-wide accomplishments that expand our revenue stream. We also offer a Safe-Harbor 401(k) with the IRS maximum employer matching.

Qualifications

In this role we need candidates to have experience with building and training large language models (LLMs).

In addition, we have two tiers of qualification for this role: Junior ML Engineer and Senior ML Engineer. To qualify directly for the Senior title upon joining, we require candidates who have received a Bachelor’s degree or above in computer science, physics, mathematics, or a closely related field.

These are not required, but our team will welcome and make good use of experience with:
  • building and training reinforcement learning algorithms and/or environments,
  • applying and developing AI alignment methods,
  • research on humans and human culture, including but not limited to background in the humanities, social sciences, cognitive science, and biology,
  • machine learning interpretability research and tools,
  • PhD-level research and writing.
Apply for this ML Engineer position
Job: ML Engineer - RL Specialist

Machine Learning Engineer - RL Specialist

Location

Mostly remote, via Zoom calls and Discord. At least once per week we hold an in-person workday in the San Francisco / Berkeley area, so if you live nearby it'd be great to have you attend.

Compensation

Starting between $120k and $180k per year depending on experience, plus healthcare benefits and equity incentives vesting over 5 years. Raises become available with strong individual performance or with team-wide accomplishments that expand our revenue stream. We also offer a Safe-Harbor 401(k) with the IRS maximum employer matching.

Qualifications

In this role we need candidates to have experience with building and training reinforcement learning algorithms and/or environments.

In addition, we have two tiers of qualification for this role: Junior ML Engineer and Senior ML Engineer. To qualify directly for the Senior title upon joining, we require candidates who have received a Bachelor’s degree or above in computer science, physics, mathematics, or a closely related field.

These are not required, but our team will welcome and make good use of experience with:
  • building and training large language models (LLMs),
  • applying and developing AI alignment methods,
  • research on humans and human culture, including but not limited to background in the humanities, social sciences, cognitive science, and biology,
  • machine learning interpretability research and tools,
  • PhD-level research and writing.
Apply for this ML Engineer position
Job: ML Engineer - Reward-Modeling Specialist

Machine Learning Engineer - Reward-modeling Specialist

Location

Mostly remote, via Zoom calls and Discord. At least once per week we hold an in-person workday in the San Francisco / Berkeley area, so if you live nearby it'd be great to have you attend.

Compensation

Starting between $120k and $180k per year depending on experience, plus healthcare benefits and equity incentives vesting over 5 years. Raises become available with strong individual performance or with team-wide accomplishments that expand our revenue stream. We also offer a Safe-Harbor 401(k) with the IRS maximum employer matching.

Qualifications

In this role we need candidates to have experience with applying and developing algorithms that learn to model human preferences represented as reward functions.

In addition, we have two tiers of qualification for this role: Junior ML Engineer and Senior ML Engineer. To qualify directly for the Senior title upon joining, we require candidates who have received a Bachelor’s degree or above in computer science, physics, mathematics, or a closely related field.

These are not required, but our team will welcome and make good use of experience with:
  • building and training reinforcement learning algorithms and/or environments,
  • building and training large language models (LLMs),
  • applying and developing AI alignment methods,
  • research on humans and human culture, including but not limited to background in the humanities, social sciences, cognitive science, and biology,
  • machine learning interpretability research and tools,
  • PhD-level research and writing.
Apply for this ML Engineer position

Contact Us

Questions? Let’s talk!