
The Best Read of the Year: Situational Awareness

Leopold Aschenbrenner, a former safety researcher at ChatGPT creator OpenAI, has published a stunning series of articles forecasting Artificial General Intelligence (AGI) by 2027, followed very shortly afterward by Artificial Superintelligence (ASI).

And he has a lot of charts, graphs, and links to explain his reasoning.

The paper is available online here:

Just reading the summaries is amazing. Here are the ones for the first two articles:

AGI by 2027 is strikingly plausible. GPT-2 to GPT-4 took us from ~preschooler to ~smart high-schooler abilities in 4 years. Tracing trendlines in compute (~0.5 orders of magnitude or OOMs/year), algorithmic efficiencies (~0.5 OOMs/year), and “unhobbling” gains (from chatbot to agent), we should expect another preschooler-to-high-schooler-sized qualitative jump by 2027.

AI progress won’t stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress (5+ OOMs) into ≤1 year. We would rapidly go from human-level to vastly superhuman AI systems. The power—and the peril—of superintelligence would be dramatic.

And you’re only on part 2 of 10 at that point.
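If the OOM math in that first summary feels abstract, here is a rough back-of-the-envelope sketch in Python. It is not from the paper itself, just my own arithmetic using the growth rates quoted above (~0.5 OOMs/year of compute plus ~0.5 OOMs/year of algorithmic efficiency) and the roughly four-year GPT-2 to GPT-4 gap:

# Back-of-the-envelope arithmetic for the "orders of magnitude" (OOM) framing
# in the first summary above. One OOM = a factor of 10. The growth rates are
# the ones quoted in the summary; the 4-year horizon mirrors the GPT-2 -> GPT-4 gap.

COMPUTE_OOMS_PER_YEAR = 0.5      # physical compute scale-up
ALGORITHMIC_OOMS_PER_YEAR = 0.5  # algorithmic efficiency gains
YEARS = 4                        # roughly GPT-2 (2019) to GPT-4 (2023)

total_ooms = (COMPUTE_OOMS_PER_YEAR + ALGORITHMIC_OOMS_PER_YEAR) * YEARS
multiplier = 10 ** total_ooms    # convert OOMs to a plain multiplier

print(f"Effective compute gain over {YEARS} years: ~{total_ooms:.0f} OOMs "
      f"(about {multiplier:,.0f}x)")
# -> Effective compute gain over 4 years: ~4 OOMs (about 10,000x)

In other words, stacking the two quoted growth rates together implies roughly a tenfold increase in effective compute per year, and that is before the "unhobbling" gains the summary also mentions.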


raindog308
