Later this week, humans from around the world are going to gather for an AI Safety Summit hosted by the UK. The briefing paper is already out, and it is a well-written overview of the state of AI and its risks.
The paper describes how Large Language Models (LLMs) like ChatGPT work, in surprisingly easy-to-understand language. You start with a program that guesses the next word in a sentence. At first, its guesses are purely random, but as it analyzes millions or billions of conversations, it builds a sophisticated probability model that allows it to sound quite human-like. That is essentially ChatGPT.
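To make the idea concrete, here is a minimal sketch of next-word prediction using simple bigram counts. Real LLMs use neural networks trained on enormous corpora rather than raw word counts, but the core loop, guessing the next word from learned probabilities, is the same.

```python
# Minimal sketch of next-word prediction via counted bigrams.
# Real LLMs use neural networks over tokens, not raw word counts,
# but the "predict the next word from probabilities" idea is the same.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = follows[prev]
    return random.choices(list(counts), weights=counts.values())[0]

# Generate a short continuation, one guessed word at a time.
word, out = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

On a toy corpus this babbles, but scale the same principle up by many orders of magnitude and you get something that sounds human.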
If you marry ChatGPT with “scaffolds” (hooks into other technologies so these models can drive software, such as web browsers), you are at the frontier of AI research in 2023.
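A scaffold, at its simplest, is just a loop that parses the model’s output for tool requests and executes them. Here is a hedged sketch of that idea; `call_model`, `fetch_page`, and the `ACTION:`/`OBSERVATION:` convention are all hypothetical stand-ins, not any real vendor’s API.

```python
# Hypothetical sketch of a "scaffold": a loop that lets a language model
# drive an external tool such as a web browser.
import re

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call."""
    if "OBSERVATION:" in prompt:
        return "The page says: contents of https://example.com"
    return "ACTION: fetch https://example.com"

def fetch_page(url: str) -> str:
    """Hypothetical stand-in for a browser tool the model is allowed to drive."""
    return f"contents of {url}"

def run_scaffold(task: str, max_steps: int = 3) -> str:
    prompt = task
    reply = ""
    for _ in range(max_steps):
        reply = call_model(prompt)
        match = re.match(r"ACTION: fetch (\S+)", reply)
        if not match:
            return reply  # no tool request: treat the reply as the final answer
        # Execute the requested tool and feed the result back to the model.
        prompt += "\nOBSERVATION: " + fetch_page(match.group(1))
    return reply

print(run_scaffold("Summarize the front page of example.com"))
```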
The paper outlines various risks, and is careful not to get into “OMG SkyNet is coming” science fiction. It focuses on the risks AI poses right now.
Some things it highlights:
- Market concentration is a significant risk. At present, building new language models takes computing power and PhDs, and that’s a lot easier for Google than for a garage startup.
- If you thought fake news, misinformation, and the like were bad before, get ready for the supercharged version. The technology can produce literally endless quantities of realistic-sounding news and information, and the sheer volume of bad information can degrade the information environment.
- Labor market disruption. In other words, people losing their jobs. There’s a nice chart in the doc, but to give you an example, an estimated 40% of legal work could be performed by AI. If someone has sunk a couple hundred thousand dollars into a legal education, it’s pretty tough to pivot to anything else, but that’s the situation many people are going to find themselves in.
- Bias and discrimination in models. This was argued less convincingly, I thought, but it’s been widely discussed.
- Enfeeblement. The report doesn’t use that term, but it talks about humans handing control to AI systems and then being unable or unwilling to take it back. This is not “OMG, we can’t turn it off and it’s firing nuclear missiles” but rather more mundane things like HR systems that automatically screen resumes, where no one wants to go back to doing it manually because it costs too much.
The report is a good and interesting read – I recommend it.