Greenlets—aka cooperative threads, user-level threads, green threads, or fibers—are similar to threads, but the application has to schedule them manually. Unlike a process or a thread, your greenlet function just keeps running until it decides to yield control to someone else.
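To make "yield control" concrete, here is a minimal sketch using the greenlet package (the task_a/task_b names are just for illustration); nothing else runs until a greenlet explicitly calls switch():

from greenlet import greenlet

def task_a():
    print("A: start")
    gr_b.switch()          # explicitly yield control to task_b
    print("A: resumed")

def task_b():
    print("B: start")
    gr_a.switch()          # yield control back to task_a

gr_a = greenlet(task_a)
gr_b = greenlet(task_b)
gr_a.switch()              # prints: A: start / B: start / A: resumed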
gevent is a coroutine-based Python networking library that uses greenlet to provide a high-level synchronous API on top of the libev or libuv event loop. Its features include a fast event loop based on libev or libuv and lightweight execution units based on greenlets.
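As a small illustration of that synchronous-looking API (a minimal sketch, not from gevent's docs): each worker below appears to block, yet they all wait concurrently on the event loop.

import gevent

def worker(n):
    gevent.sleep(1)        # cooperative: yields to the event loop
    return n * n

jobs = [gevent.spawn(worker, n) for n in range(5)]
gevent.joinall(jobs)       # all five finish in ~1 second, not 5
print([job.value for job in jobs])   # [0, 1, 4, 9, 16]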
The "greenlet" package is a spin-off of Stackless, a version of CPython that supports micro-threads called "tasklets". Tasklets run pseudo-concurrently (typically in a single or a few OS-level threads) and are synchronized with data exchanges on "channels".
The Python code of the newly spawned greenlet uses the heap in the normal way, and the interpreter continues using the C-stack as usual. When one greenlet switches to another, the relevant part of the C-stack (starting from the base address of the switching greenlet) is copied to the heap.
Greenlets provide concurrency but not parallelism. Concurrency is when code can run independently of other code. Parallelism is the execution of concurrent code simultaneously. Parallelism is particularly useful when there's a lot of work to be done in userspace, which is typically CPU-heavy stuff. Concurrency is useful for breaking problems apart, so that the different parts can be scheduled and managed more easily.
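To illustrate the parallelism side of that distinction (a minimal stdlib sketch; the cpu_heavy workload is made up for illustration): a process pool spreads CPU-bound work across cores, which greenlets alone cannot do.

from multiprocessing import Pool

def cpu_heavy(n):
    # a stand-in for real CPU-bound work
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(4) as pool:                        # 4 worker processes
        print(pool.map(cpu_heavy, [10**6] * 4))  # runs in parallel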
Greenlets really shine in network programming, where interactions with one socket can occur independently of interactions with other sockets. This is a classic example of concurrency. Because each greenlet runs in its own context, you can continue to use synchronous APIs without threading. This is good because threads are very expensive in terms of virtual memory and kernel overhead, so the concurrency you can achieve with threads is significantly less. Additionally, threading in Python is more expensive and more limited than usual due to the GIL. Alternative approaches to concurrency are usually projects like Twisted, libevent, libuv, node.js, etc., where all your code shares the same execution context and registers event handlers.
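A hedged sketch of that style (fetch_head is a made-up helper; the hostnames are the same ones used in the benchmarks below): each socket is driven by plain blocking-style code in its own greenlet, and gevent's socket module makes the waits cooperative.

from gevent import socket, spawn, joinall

def fetch_head(host):
    s = socket.create_connection((host, 80))   # yields while connecting
    s.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
    line = s.makefile("rb").readline()         # yields while waiting for data
    s.close()
    print(host, line.decode().strip())

joinall([spawn(fetch_head, h) for h in
         ("www.example.com", "www.python.org")])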
It's an excellent idea to use greenlets (with appropriate networking support, such as through gevent) for writing a proxy, since your request handlers can execute independently and should be written as such.
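For instance, a toy TCP proxy sketch along those lines (gevent.server.StreamServer is the real API; the port numbers and the pipe/handle helpers are made up for illustration, and half-close handling is ignored):

import gevent
from gevent.server import StreamServer
from gevent import socket

UPSTREAM = ("127.0.0.1", 8080)   # assumed backend, for illustration only

def pipe(src, dst):
    while True:
        data = src.recv(4096)    # yields; other connections keep going
        if not data:
            break
        dst.sendall(data)

def handle(client, address):
    upstream = socket.create_connection(UPSTREAM)
    # one greenlet per direction, both written as plain blocking loops
    gevent.joinall([gevent.spawn(pipe, client, upstream),
                    gevent.spawn(pipe, upstream, client)])

StreamServer(("0.0.0.0", 9000), handle).serve_forever()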
Greenlets provide concurrency for the reasons I gave earlier. Concurrency is not parallelism. By concealing event registration and performing scheduling for you on calls that would normally block the current thread, projects like gevent expose this concurrency without requiring change to an asynchronous API, and at significantly less cost to your system.
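That concealment is what gevent's monkey-patching does: it replaces blocking stdlib calls with cooperative versions, so unmodified synchronous code gains concurrency. A minimal sketch (resolve is a made-up helper):

from gevent import monkey
monkey.patch_all()       # must run before the other imports

import socket
import gevent

def resolve(host):
    # looks like a blocking call, but now yields to the event loop
    return socket.gethostbyname(host)

jobs = [gevent.spawn(resolve, h)
        for h in ("www.example.com", "www.python.org")]
gevent.joinall(jobs)
print([j.value for j in jobs])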
Correcting @TemporalBeing's answer above: greenlets are not "faster" than threads, and spawning 60000 threads to solve a concurrency problem is an incorrect programming technique; a small pool of threads is appropriate instead. Here is a more reasonable comparison (from my reddit post in response to people citing this SO post).
import gevent
from gevent import socket as gsock
import socket as sock
import threading
from datetime import datetime


def timeit(fn, URLS):
    t1 = datetime.now()
    fn()
    t2 = datetime.now()
    print(
        "%s / %d hostnames, %s seconds" % (
            fn.__name__,
            len(URLS),
            (t2 - t1).total_seconds()
        )
    )


def run_gevent_without_a_timeout():
    ip_numbers = []

    def greenlet(domain_name):
        ip_numbers.append(gsock.gethostbyname(domain_name))

    # one greenlet per hostname; gevent's event loop multiplexes them
    jobs = [gevent.spawn(greenlet, domain_name) for domain_name in URLS]
    gevent.joinall(jobs)
    assert len(ip_numbers) == len(URLS)


def run_threads_correctly():
    ip_numbers = []

    def process():
        while queue:
            try:
                domain_name = queue.pop()
            except IndexError:
                pass
            else:
                ip_numbers.append(sock.gethostbyname(domain_name))

    # a fixed pool of 50 threads draining a shared work queue
    threads = [threading.Thread(target=process) for i in range(50)]

    queue = list(URLS)
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    assert len(ip_numbers) == len(URLS)


URLS_base = ['www.google.com', 'www.example.com', 'www.python.org',
             'www.yahoo.com', 'www.ubc.ca', 'www.wikipedia.org']

for NUM in (5, 50, 500, 5000, 10000):
    URLS = []
    for _ in range(NUM):
        for url in URLS_base:
            URLS.append(url)

    print("--------------------")
    timeit(run_gevent_without_a_timeout, URLS)
    timeit(run_threads_correctly, URLS)
Here are some results:
--------------------
run_gevent_without_a_timeout / 30 hostnames, 0.044888 seconds
run_threads_correctly / 30 hostnames, 0.019389 seconds
--------------------
run_gevent_without_a_timeout / 300 hostnames, 0.186045 seconds
run_threads_correctly / 300 hostnames, 0.153808 seconds
--------------------
run_gevent_without_a_timeout / 3000 hostnames, 1.834089 seconds
run_threads_correctly / 3000 hostnames, 1.569523 seconds
--------------------
run_gevent_without_a_timeout / 30000 hostnames, 19.030259 seconds
run_threads_correctly / 30000 hostnames, 15.163603 seconds
--------------------
run_gevent_without_a_timeout / 60000 hostnames, 35.770358 seconds
run_threads_correctly / 60000 hostnames, 29.864083 seconds
The misunderstanding everyone has about non-blocking IO with Python is the belief that the Python interpreter can attend to the work of retrieving results from sockets at large scale faster than the network connections themselves can return IO. While this is certainly true in some cases, it is not true nearly as often as people think, because the Python interpreter is really, really slow. In my blog post here, I illustrate some graphical profiles showing that, even for very simple things, if you are dealing with crisp and fast network access to things like databases or DNS servers, those services can come back a lot faster than the Python code can attend to many thousands of those connections.
Taking @Max's answer and extending it for scaling, you can see the difference. I achieved this by changing the URLs to be filled as follows:
URLS_base = ['www.google.com', 'www.example.com', 'www.python.org',
             'www.yahoo.com', 'www.ubc.ca', 'www.wikipedia.org']

URLS = []
for _ in range(10000):
    for url in URLS_base:
        URLS.append(url)
I had to drop the multiprocessing version, as it fell over before reaching 500 iterations; but at 10,000 iterations:
Using gevent it took: 3.756914
-----------
Using multi-threading it took: 15.797028
So you can see there is a significant difference in I/O performance when using gevent.
This is interesting enough to analyze. Here is some code comparing the performance of greenlets versus a multiprocessing pool versus multi-threading:
import gevent
from gevent import socket as gsock
import socket as sock
from multiprocessing import Pool
from threading import Thread
from datetime import datetime


class IpGetter(Thread):
    def __init__(self, domain):
        Thread.__init__(self)
        self.domain = domain

    def run(self):
        self.ip = sock.gethostbyname(self.domain)


if __name__ == "__main__":
    URLS = ['www.google.com', 'www.example.com', 'www.python.org',
            'www.yahoo.com', 'www.ubc.ca', 'www.wikipedia.org']

    # greenlets: one per hostname, multiplexed on gevent's event loop
    t1 = datetime.now()
    jobs = [gevent.spawn(gsock.gethostbyname, url) for url in URLS]
    gevent.joinall(jobs, timeout=2)
    t2 = datetime.now()
    print("Using gevent it took: %s" % (t2 - t1).total_seconds())
    print("-----------")

    # multiprocessing: one worker process per hostname
    t1 = datetime.now()
    pool = Pool(len(URLS))
    results = pool.map(sock.gethostbyname, URLS)
    t2 = datetime.now()
    pool.close()
    print("Using multiprocessing it took: %s" % (t2 - t1).total_seconds())
    print("-----------")

    # threads: one OS thread per hostname
    t1 = datetime.now()
    threads = []
    for url in URLS:
        t = IpGetter(url)
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    t2 = datetime.now()
    print("Using multi-threading it took: %s" % (t2 - t1).total_seconds())
Here are the results:
Using gevent it took: 0.083758
-----------
Using multiprocessing it took: 0.023633
-----------
Using multi-threading it took: 0.008327
I think the greenlet claim is that, unlike the multithreading library, it is not bound by the GIL. Moreover, the greenlet docs say it is meant for network operations. For a network-intensive operation, thread-switching is fine, and you can see that the multithreading approach is pretty fast. Also, it's always preferable to use Python's official libraries; I tried installing greenlet on Windows and encountered a DLL dependency problem, so I ran this test on a Linux VM. Always try to write code with the hope that it runs on any machine.