CommunityNews

OpenAI: our principles

AI has the potential to significantly improve many aspects of society.

This technology, like others before, will give people more capability and agency; what people will be able to do with AI will dwarf what people could do with steam engines or electricity.

We envision a world with widespread flourishing at a level that is currently difficult to imagine, and a world in which individual potential, agency, and fulfillment significantly increase. A lot of the things we’ve only let ourselves dream about in sci-fi could become reality, and most people could live more meaningful lives than most are able to today.

But this outcome is not guaranteed. Power in the future can either be held by a small handful of companies using and controlling superintelligence, or it can be held in a decentralized way by people. We believe the latter is much better, and our goal is to put truly general AI in the hands of as many people as possible. Like the present, the future won’t be all bad or all good, but the decisions we make now can help maximize the good.

Our mission is to ensure that AGI benefits all of humanity. Here are the principles that guide our work.

  1. Democratization. We will resist the potential of this technology to consolidate power in the hands of the few.

This means that in addition to giving everyone access to AI, we need to ensure that key decisions about AI are made via democratic processes and with egalitarian principles, and not just made by AI labs.

  2. Empowerment. We believe AI can empower everyone to achieve their goals, learn more, be happier and more fulfilled, and pursue their dreams, and that society as a whole will benefit from this.

Achieving this requires letting people explore the enormous potential in front of us, and we need to build products that enable this. Users should reliably be able to accomplish increasingly valuable tasks with our services.

The world is diverse and people have different needs. We want to give our users the autonomy they need and allow as much as we reasonably can.

Although we want to give our users very broad latitude in how they use our services, and strongly believe that AI will be hugely beneficial on the whole, we have a responsibility to build and deploy it in a way that minimizes harm. This includes, of course, preventing catastrophic harm, but also minimizing local harms and avoiding potentially corrosive societal effects. This will mean erring on the side of caution in the face of uncertainty, and relaxing constraints as more evidence comes in.

  3. Universal prosperity. We want a future where everyone can have an excellent life.

By putting easy-to-use AI systems with a lot of compute power into the hands of everyone, we believe people will find new ways to generate value and massively improve quality-of-life for everyone, especially with discovery of new science.

For prosperity to be fully realized and widely shared, we believe that 1) our governments may need to consider new economic models to ensure that everyone can participate in the value creation in front of us and 2) we need to build huge amounts of AI infrastructure and develop new technology to drive costs of AI infrastructure way down.

A lot of the things that we do that look weird—buying huge amounts of compute while our revenue is relatively small, vertically integrating to lower costs and make our technology easier to use, pushing to build datacenters all around the world, and much more—are driven by our fundamental belief in a future of universal prosperity.

  4. Resilience. AI will introduce new risks, and we will work with other companies, ecosystems, governments, and society to solve them. We will make significant use of our Foundation’s resources to support this work.

No AI lab can ensure a good future alone. For an obvious example, there may be extremely capable models that make it easier to create a new pathogen, and we need a society-wide approach to defend against this with pathogen-agnostic countermeasures. For another example, as the cybersecurity capabilities of models increase, we need to rapidly use these models to help secure open-source software and critical infrastructure, while training the models to help everyone create more secure software.

This is an expansion of our long-held strategy of iterative deployment; we believe society needs to contend with each successive level of AI capability, understand it, integrate it, and figure out the best path forward together. This cannot be done in a vacuum; society and technology co-evolve, and that requires time.

We do not mean this as our only safety strategy; we also need to make safe systems and continue to do great work on technical alignment.

We expect there will be periods where we need to collaborate with governments, international agencies, and other AGI efforts to ensure that we have sufficiently solved serious alignment, safety, or societal problems before proceeding further with our work.

  5. Adaptability. We continue to believe the only way to meet the challenges of a very unpredictable future is to be prepared to update our positions as we learn more. We also acknowledge that OpenAI is a much larger force in the world than it was a few years ago, and we will be transparent about when, how, and why our operating principles change. As a concrete example, while we are quite confident that universal prosperity will remain really important, we can imagine periods in the future where we have to trade off some empowerment for more resilience.

AI development has brought many surprises, and more are still to come. As the technology advances, its emergent behaviors will become increasingly difficult to predict. We embrace that uncertainty by advancing capabilities carefully, deploying systems iteratively, and learning from their interactions with the world.

It wasn’t that long ago that we were nervous about releasing the weights of GPT‑2 because we weren’t sure what the impacts on society would be. In retrospect that worry was misplaced, but it led us to discover the strategy of iterative deployment, which has been one of the most important things we’ve figured out.


We are heading into a very impactful phase as the technology continues to improve. It’s very fair to critique us on every decision; we deserve an enormous amount of scrutiny given the weight of what we are doing. We will not get everything right, but we will learn quickly and course-correct.

We are committed to doing our part to make the future better than the past; we feel lucky to get to take on such important work.

Read in full here:

https://openai.com/index/our-principles/
