
Python 3.11 is on average 25% faster than 3.10 when compiled with GCC on Ubuntu Linux; depending on the workload, the speedup ranges from 10% to 60%



Is Python 3.11 really the fastest Python yet? One thing is certain: CPython 3.11 is on average 25% faster than CPython 3.10 when measured with the pyperformance benchmark suite and compiled with GCC on Ubuntu Linux. Depending on the workload, the speedup can range from 10% to 60%. On October 4th, the Python development team announced the improvements and new features in Python 3.11. Much of the performance work comes from Faster CPython, a project backed by Microsoft to speed up the Python programming language.

It is a Microsoft-funded project whose members include Python creator Guido van Rossum, Microsoft Senior Software Engineer Eric Snow, and Mark Shannon, who is under contract with Microsoft as the project’s technical lead. Python has a long-standing reputation for being slow. “While Python will never match the performance of low-level languages like C, Fortran, or even Java, we would like it to be competitive with fast implementations of scripting languages, like V8 for JavaScript,” says Mark Shannon.

Python is an interpreted, multi-paradigm, cross-platform programming language. It supports structured, functional, and object-oriented imperative programming. It has strong dynamic typing, automatic memory management through garbage collection, and an exception handling system; in this respect it is similar to Perl, Ruby, Scheme, Smalltalk, and Tcl.

To be efficient, virtual machines for dynamic languages must specialize the code they execute based on the types and values of the program they are running. This specialization is often associated with just-in-time (JIT) compilers, but it is also beneficial without generating machine code. The Faster CPython project focuses on two main areas of Python: faster startup and faster execution.

Note that specialization improves performance, while adaptation lets the interpreter adjust quickly as a program’s usage patterns change, limiting the extra work caused by poor specialization.

Faster startup

Python caches bytecode in the __pycache__ directory to speed up module loading. In version 3.10, the module execution process looked like this:

Read __pycache__ -> Unmarshal -> Heap allocated code object -> Evaluate

In Python 3.11, the core modules essential for starting Python are “frozen”. This means that their code objects (and bytecode) are statically allocated by the interpreter. This reduces the module execution process to:

Statically allocated code object -> Evaluate
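
One way to see which startup modules come from frozen, statically allocated code objects rather than from __pycache__ is to look at their import specs. A minimal sketch, assuming a default CPython 3.11 build (the exact set of frozen modules can differ between builds):

import importlib.util

# On a default CPython 3.11 build, several startup-critical modules are frozen
# into the interpreter: their spec reports "frozen" as the origin instead of a
# path to a cached .pyc file under __pycache__.
for name in ("os", "codecs", "io", "json"):
    spec = importlib.util.find_spec(name)
    print(f"{name}: {spec.origin}")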

Starting the interpreter is now 10-15% faster in Python 3.11. This has a big impact for short-running programs written in Python.

Faster execution

Python frames are created whenever Python calls a Python function. A frame holds runtime information. The new frame optimizations are as follows:

  • a simplified frame creation process;
  • avoided memory allocation by generously reusing frame space on the C stack;
  • a streamlined internal frame structure that contains only essential information. Frames previously carried additional debugging and memory management information.

Legacy frame objects are now created only when requested by debuggers or by Python introspection functions such as sys._getframe or inspect.currentframe. For most user code, no frame objects are created at all. As a result, almost all Python function calls have been sped up significantly. The team measured a 3-7% speedup in pyperformance.
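
For illustration, a minimal sketch of the kind of introspection call that forces CPython 3.11 to materialize a frame object on demand, using the two functions named above:

import inspect
import sys

def probe():
    # Requesting a frame object is what forces CPython 3.11 to materialize the
    # otherwise internal, lazily created frame for this call.
    here = sys._getframe()                   # frame of probe() itself
    caller = inspect.currentframe().f_back   # frame of whoever called probe()
    print(here.f_code.co_name, "was called from", caller.f_code.co_name)

def main():
    probe()   # prints: probe was called from main

main()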

In earlier versions, calling a Python function caused Python to call an evaluating C function to interpret that function’s code. This effectively limited pure Python recursion to what was safe for the C stack. In 3.11, when CPython detects Python code calling another Python function, it sets up a new frame and “jumps” to the new code inside that frame. This avoids calling the C interpreting function altogether.

Faster CPython explores optimizations for CPython. As mentioned earlier, the core team is funded by Microsoft to work full time on this project. Pablo Galindo Salgado is also funded by Bloomberg LP to work part-time on the project. Finally, many contributors are community volunteers.

The Python language is released under a free license close to the BSD license and runs on most computing platforms, from smartphones to computers: Windows, Unix-like systems (notably GNU/Linux), macOS, and even Android and iOS; it can also be translated to Java or .NET. It is designed to maximize programmer productivity by offering high-level tools and an easy-to-use syntax.

Most Python function calls no longer consume space on the C stack, which speeds up most of these calls. In simple recursive functions such as Fibonacci or factorial, a 1.7x speedup was observed. It also means that recursive functions can recurse much deeper (if the user raises the recursion limit). The team measured a 1-3% improvement in pyperformance.
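
As a rough illustration of the deeper recursion this enables, here is a minimal sketch; the recursion limit and depth below are arbitrary demonstration values, and exact safe depths still depend on the build:

import sys

def countdown(n):
    # Pure Python recursion: in 3.11 each Python-to-Python call no longer eats
    # a C stack frame, so depth is governed mainly by the Python recursion limit.
    return 0 if n == 0 else countdown(n - 1)

def fib(n):
    # The classic recursive Fibonacci mentioned above.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

sys.setrecursionlimit(50_000)   # raising the limit this far is only reasonable on 3.11+
print(countdown(40_000))        # deep pure-Python recursion; risked a crash on older versions
print(fib(25))                  # 75025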

“The addition of a specializing, adaptive interpreter to CPython will bring significant performance improvements. It is hard to give meaningful numbers, because it depends very much on the benchmarks and on work that has not yet been done. Extensive experimentation suggests speedups of up to 50%. Even if the gain were only 25%, it would still be a nice improvement,” says Shannon.

“Specifically, we want to achieve these performance goals with CPython for the benefit of all Python users, including those who cannot use PyPy or other alternative virtual machines,” he adds. When Devclass spoke to Python Steering Council member and core developer Pablo Galindo about the new Memray memory profiler, he described how the Python team is using Microsoft’s work in version 3.11.

“One of the things we’re doing is making the interpreter faster,” says Pablo Galindo, Python Steering Council member and core developer. “But it will also use a little more memory, just a little, because most of these optimizations have some kind of memory cost, since we have to store things for later use, or because we have an optimized version but sometimes someone needs to request the non-optimized version for debugging, so we need to store both.”

PEP 659: Specializing Adaptive Interpreter

PEP 659 is one of the key elements of the Faster CPython project. The general idea is that although Python is a dynamic language, most of the code has regions where objects and types rarely change. This concept is known as type stability. At run time, Python tries to find common patterns and type stability in the running code. Python then replaces the current operation with a more specialized operation.

The specialized operation uses fast paths available only for those use cases/types, and these generally perform better than their generic counterparts. It also relies on another concept called “inline caching”, in which Python caches the results of expensive operations directly in the bytecode. The specializer also combines certain pairs of common instructions into one superinstruction, reducing overhead during execution.

Python only specializes “hot” code (code that is executed multiple times), which keeps it from wasting time on code that runs only once. Python can also de-specialize when code is too dynamic or when its usage changes. Specialization is attempted periodically, and attempts are cheap, which allows specialization to adapt to new circumstances.
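
One way to watch this specialization happen is the dis module, whose dis() function gained adaptive and show_caches options in 3.11. A minimal sketch; the specialized opcode names it prints (for example BINARY_OP_ADD_INT for the int/int case) are interpreter internals and may change between versions:

import dis

def add(a, b):
    return a + b

# Warm the function up so the adaptive interpreter sees "hot" code and can
# specialize the generic BINARY_OP for the int/int case it keeps observing.
for _ in range(10_000):
    add(1, 2)

# adaptive=True shows the quickened, specialized bytecode; show_caches=True
# also displays the inline cache entries stored directly in the bytecode.
dis.dis(add, adaptive=True, show_caches=True)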

The pyperformance project aims to be an authoritative source of benchmarks for all Python implementations. The focus is on real-world benchmarks, rather than synthetic benchmarks, using full applications wherever possible.

Source: Python

And you?

Python 3.11 is said to be on average 25% faster than 3.10: what do you think?

What do you think of Python 3.11?

Do you have experience with Python?

Are you for or against the removal of the Global Interpreter Lock?

See also:

Python 3.11 will improve the location of errors in tracebacks and bring new features

Version 3.2 of Django framework is available, with automatic detection of AppConfig, brings new decorators to the admin module

Django 2.0 is available in stable version, what’s new in this version of the web framework written in Python?

JetBrains supports Django: get a 30% discount on the purchase of an individual PyCharm Professional license and all proceeds will be donated to the Django Foundation
