Python 3.8 was released a few weeks ago, and rather than jumping the gun, I wanted to get familiar with the new features and actually use them before writing about them. Whether or not these features end up in your own code is up to you, but I think getting familiar with them matters either way: we all need to be able to read Python written after 3.8.
We’ll kick it off with my favorite new feature,
The Walrus Operator
Why is it called that? Because the operator, :=, looks like a walrus turned onto its side. It is not only a great tool for use in while loops, like this example adapted from Python’s documentation:
while (line := file.readline()) != "end":
    print(line)
More importantly for me, however, the walrus operator assigns and returns a value in one expression. In other words, we can skip a separate assignment line, at the end of a statistical function for example, by folding the last calculation into a single walrus expression.
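As a small sketch of that idea (the data and variable names here are my own, not from Python’s docs), a walrus expression inside a comprehension computes the mean once and reuses both the value and the name:

```python
import statistics

data = [81, 73, 94, 62, 85]

# := assigns the mean and yields it in the same expression,
# so we can filter against it and still keep the name afterward.
above_average = [x for x in data if x > (mean := statistics.fmean(data))]

print(mean)           # 79.0
print(above_average)  # [81, 94, 85]
```

Note that `statistics.fmean` is itself new in 3.8, so this snippet is pure 3.8 throughout.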
Positional-Only Parameters
A little less flashy, but this is also a great change, and one that doesn’t affect the classic behavior of function parameters. To make parameters positional-only, we simply add a slash after them in the function definition; everything before the / must then be passed positionally.
So we can now call a function where ournum is equal to Σx + Σy like:
result = func(x, y, ournum=sum(x) + sum(y))
Whereas previously, we might have written this in two steps:
ournum = sum(x) + sum(y)
result = func(x, y, ournum)
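Here is a minimal sketch of what such a func might look like with the new syntax. The body is my own invention for illustration; only the / in the signature is the 3.8 feature:

```python
# x and y sit before the slash, so they are positional-only;
# ournum comes after it and may still be passed by keyword.
def func(x, y, /, ournum):
    return ournum / (len(x) + len(y))

x = [1, 2, 3]
y = [4, 5]
result = func(x, y, ournum=sum(x) + sum(y))
print(result)  # 3.0

# Calling func(x=x, y=y, ournum=15) would now raise a TypeError,
# because x and y can no longer be passed as keywords.
```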
Shared Memory
Shared memory comes down to sharing data across Python processes. Rather than manually pickling and transferring that data, or saving it to a file for access, data can now be accessed across processes using the new shared_memory module inside multiprocessing, which segues us into our next addition,
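A minimal sketch of the new multiprocessing.shared_memory module: one block is created, and a second handle attaches to it by name, the way a separate process would (the size and contents here are arbitrary placeholders):

```python
from multiprocessing import shared_memory

# Create a named shared block; any process that knows the name can attach.
shm = shared_memory.SharedMemory(create=True, size=5)
shm.buf[:5] = b"hello"

# A second process would attach with exactly this call, using shm.name.
attached = shared_memory.SharedMemory(name=shm.name)
data = bytes(attached.buf[:5])
print(data)  # b'hello'

# Clean up: close every handle, then unlink once to free the block.
attached.close()
shm.close()
shm.unlink()
```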
Improved Pickling
Pickling is a fantastic tool for serializing data and code to be used elsewhere, and with the new additions to the pickle module, Python boasts more efficient serialization and a lot more versatility in what you can pickle. That is very exciting for those of us on the Data Science side of the spectrum, and just as well for anyone working in Flask or Django with limited space remaining on a VPS webserver.
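The headline change is pickle protocol 5 (PEP 574), which can hand large buffers off out-of-band instead of copying them into the pickle stream. A short sketch, where the bytearray stands in for a large array of data:

```python
import pickle

# A stand-in for a big in-memory buffer (e.g. model weights).
payload = {"weights": bytearray(b"\x00" * 1024)}

# With protocol 5 and a buffer_callback, large buffers can be
# collected separately rather than embedded in the byte stream.
buffers = []
data = pickle.dumps(payload, protocol=5, buffer_callback=buffers.append)

# loads() stitches the out-of-band buffers back in.
restored = pickle.loads(data, buffers=buffers)
print(restored == payload)  # True
```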
A Faster Back-End
Last but not least, Python boasts improvements to CPython’s C-based back-end, another feature that certainly deserves attention. As a data scientist, I frequently find myself hitting the limits of Python when it comes to processing data, which is unfortunate, but certainly real. It can be scary when merely reading in the data crashes the Jupyter kernel and you’re about to try to fit a pipeline to said data.
Well, the good news is that it is getting better, and hopefully it will continue to get better in the future. Of course, we are referring here to Python’s C API, not pure Python; running Python without that API properly configured can be a nightmare when dealing with absolutely enormous data sets.
Even more fortunate for us, CPython has also had work done to improve the configuration of its initialization API, which is exciting: it will make optimization easier, and will most likely let newer users take advantage of Python’s C engine.
I really hope that Python continues to focus on stability, speed, and efficiency in the coming years. These improvements, though they might seem small, point in a bold direction and are beautifully implemented, and in particular are something I have been enjoying. Those are my favorite features, but a few more were added, including:
- the = specifier in f-strings for quick debugging output, e.g. f"{total=}"
- new typing features: Literal, Final, TypedDict, and Protocol
- math.prod() and math.isqrt()
- functools.cached_property
All of which are welcome advancements in the Python world, especially for those working with large data sets and bumping up against memory limits.