Has the Biggest Performance Bottleneck in Python Finally Been Slain?

Python 3.13 is now out and the end of the Global Interpreter Lock may be in sight.

The Python Global Interpreter Lock (GIL) is basically a gatekeeper that allows just one thread to control the Python interpreter at a time. That means no matter how many threads or CPU cores you’re running, only one thread is executing Python code at any moment.

If you’re writing single-threaded programs, you won’t even notice the GIL. But if you’re running CPU-heavy, multi-threaded code, you’ll feel the pain: it becomes a bottleneck fast. That’s why the GIL is notorious in the Python world, especially among people writing multi-threaded applications, where it serializes all the work.

Tons of people never run into this, because single-threaded Python programs don’t care, but as you try to scale up, you can hit a wall.

Why Does Python Have a GIL?

Python manages memory with reference counting. Each object has a counter keeping track of how many references point to it. When that number hits zero, Python frees up the memory.
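To make that concrete, here is a minimal sketch using the standard sys.getrefcount() function (note that the number it reports is one higher than you might expect, because passing the object to the function temporarily creates an extra reference):

import sys

obj = []
print(sys.getrefcount(obj))   # 2: the name "obj" plus the temporary argument reference

alias = obj                   # a second name now points at the same list
print(sys.getrefcount(obj))   # 3: the count went up

del alias                     # dropping a reference decrements the count
print(sys.getrefcount(obj))   # back to 2; when it reaches zero, Python frees the object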

However, if you have multiple threads incrementing and decrementing the counter simultaneously, you quickly descend into chaos. To avoid this, Python could lock every shared object to prevent race conditions, but more locks mean more chances of deadlocks, which is a mess in itself. So Python takes a shortcut: a single lock on the interpreter. This keeps things simple (no deadlocks), but at a cost: it makes Python’s multi-threading practically single-threaded when it comes to CPU-bound tasks.

How much this affects your code depends on the code, of course. If you’re I/O-bound, it makes little difference, but for CPU-bound work, Python has always lagged behind languages with true multi-threading.
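You can see the effect yourself with a rough, back-of-the-envelope test (exact timings will vary by machine): on a standard GIL build, a CPU-bound function run in two threads takes about as long as running it twice sequentially, because the threads take turns holding the interpreter.

import time
from threading import Thread

def count_down(n):
    # Pure-Python CPU work; the GIL lets only one thread run this at a time.
    while n > 0:
        n -= 1

N = 20_000_000

start = time.perf_counter()
count_down(N)
count_down(N)
print(f"sequential:  {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
t1 = Thread(target=count_down, args=(N,))
t2 = Thread(target=count_down, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()
print(f"two threads: {time.perf_counter() - start:.2f}s")   # roughly the same, not half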

Is the GIL Dead?

Starting with 3.13, a new experimental mode is available:

CPython now has experimental support for running in a free-threaded mode, with the global interpreter lock (GIL) disabled. This is an experimental feature and therefore is not enabled by default.

Free-threaded execution allows for full utilization of the available processing power by running threads in parallel on available CPU cores. While not all software will benefit from this automatically, programs designed with threading in mind will run faster on multi-core hardware. The free-threaded mode is experimental and work is ongoing to improve it: expect some bugs and a substantial single-threaded performance hit. Free-threaded builds of CPython support optionally running with the GIL enabled at runtime using the environment variable PYTHON_GIL or the command-line option -X gil=1.
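If you want to poke at this yourself, the sketch below is my rough understanding of how to tell what you’re running: the free-threaded build is installed alongside the normal one (typically as python3.13t), sysconfig reports whether the build was compiled with the GIL disabled, and 3.13 adds an internal sys._is_gil_enabled() helper that reflects whether the GIL is actually active at runtime (remember it can be turned back on with PYTHON_GIL=1 or -X gil=1, as quoted above).

import sys
import sysconfig

# True if this interpreter was compiled with free-threading support
# (the separate "t" build of CPython 3.13).
free_threaded_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
print("free-threaded build:", free_threaded_build)

# Even on a free-threaded build the GIL may have been re-enabled at runtime,
# so check the live state where the helper is available.
if hasattr(sys, "_is_gil_enabled"):
    print("GIL currently enabled:", sys._is_gil_enabled())
else:
    print("GIL currently enabled: True (standard build)")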

My guess is that they’ll keep it experimental for a while until practically everyone is using it, then make it standard.

Would this be a Python 4 thing? There is no Python 4 on the horizon, but I don’t think this needs a “big release” number. The language isn’t changing, just the engine underneath it, so you don’t have to worry about breaking people’s code. If things work out, this could be mainstream before 3.20.

raindog308
