LowEndBox - Cheap VPS, Hosting and Dedicated Server Deals

All the Ways AI Is Going to Ruin the World, Courtesy the UK

Later this week, humans from around the world are going to gather for an AI Safety Summit hosted by the UK.  The briefing paper is already out, and it is a well-written overview of the state of AI and its risks.

The paper describes how Large Language Models (LLMs) like ChatGPT work, in surprisingly easy-to-understand language.  You start with a program that guesses the next word in a sentence.  At first, its guess is purely random, but as it analyzes millions or billions of conversations, soon it has a sophisticated probability model that allows it to sound quite human-like.  That is essentially ChatGPT.
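To make that concrete, here is a toy sketch of the idea in Python: a tiny bigram model that counts how often each word follows another in a small corpus, then samples a likely next word. Real LLMs use neural networks over vastly more data, but the core "predict the next word from probabilities" loop is the same; the corpus here is made up for illustration.

```python
import random
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for "millions of conversations".
corpus = [
    "the cat sat on the mat",
    "the cat ate the fish",
    "the dog sat on the rug",
]

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    options = counts[prev]
    words = list(options)
    weights = list(options.values())
    return random.choices(words, weights=weights)[0]

print(next_word("the"))  # "cat" is the most likely guess (2 of the 6 follow-ups)
```

With more text, the counts become better probability estimates and the guesses start to sound fluent; scale that idea up by many orders of magnitude and you get the behavior the paper describes.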

If you marry ChatGPT with “scaffolds” (hooks into other technologies so these models can drive software, such as web browsers), you are at the frontier of AI research in 2023.
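A "scaffold" in this sense is just code that lets the model's text output drive other software. A minimal sketch of the pattern, with a stubbed-out model and a hypothetical tool-call format (real scaffolds call an actual LLM API and parse richer output):

```python
# Stand-in for an LLM API call; always asks to fetch a page in this sketch.
def fake_model(prompt):
    return "TOOL: fetch_url https://example.com"

# Stand-in for a real browser/HTTP tool.
def fetch_url(url):
    return f"<html>contents of {url}</html>"

TOOLS = {"fetch_url": fetch_url}

def run_scaffold(prompt):
    """If the model's reply names a tool, run it -- the text drives the software."""
    reply = fake_model(prompt)
    if reply.startswith("TOOL: "):
        name, arg = reply[len("TOOL: "):].split(" ", 1)
        return TOOLS[name](arg)
    return reply

print(run_scaffold("Summarize example.com"))
```

The interesting (and risky) part is that loop: once the model's output can invoke tools, it is no longer just producing text, it is taking actions.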

The paper outlines various risks, and is careful not to get into “OMG SkyNet is coming” science fiction.  It focuses on the risks AI poses right now.

Some things it highlights:

  • Market concentration is a significant risk.  At present, it takes computing power and PhDs to make new language models and that’s a lot easier for Google than a garage startup.
  • If you thought fake news, misinformation, etc. was bad before, get ready for the supercharged version.  The technology can produce literally endless quantities of very realistic-sounding news and information.  The sheer quantity of bad information can degrade the information environment.
  • Labor market disruption.  In other words, people losing their jobs.  There’s a nice chart in the doc, but to give you an example, an estimated 40% of legal work can be performed by AI.  If someone has sunk a couple hundred thousand dollars into a legal education, it’s pretty tough to pivot to anything else, but that’s the situation people are going to find themselves in.
  • Bias and discrimination in models.  This was argued less convincingly I thought, but it’s been widely discussed.
  • Enfeeblement.  The report doesn’t use that term, but it talks about humans handing control to AI systems and not being able or wanting to take it back.  This is not “OMG, we can’t turn it off and it’s firing nuclear missiles” but rather more mundane things like HR systems that automatically screen resumes and no one wants to go back to doing it manually because it costs too much.

The report is a good and interesting read – I recommend it.

raindog308

2 Comments

  1. It’s interesting to hear about the upcoming AI Safety Summit hosted by the UK and the well-written briefing paper that has already been released. The paper explains how Large Language Models (LLMs) like ChatGPT work and how they become more sophisticated as they analyze millions or billions of conversations.

    October 31, 2023 @ 12:31 am | Reply
  2. ChatGPT is helpful for work.

    December 19, 2023 @ 9:55 pm | Reply
