To conclude this chapter, let's review what we have learned:
Python and the Python ecosystem: With regard to RabbitMQ, we have shown how easy it is to take the first step into the AMQP world.
Pika: We took a deep dive into Pika and showed how simple it is to implement a basic producer and consumer.
Scraper project: We have implemented a simple scraper backbone, using just Pika, that you can reuse in your future projects. Simplicity is key here; more often than not, people over-complicate things.
Pika API: We have covered the most important parts of Pika's API and the scenarios in which each comes in useful.
Celery: We have introduced the de facto background-processing framework for Python, which primarily uses RabbitMQ as its broker (though it supports other backends as well).
Celery Scraper: We have shown how easily, almost mechanically, the "old" Scraper code migrates to Celery, and how much cleaner the resulting code is.
Celery features: We went over other Celery features, including scheduling and HTTP hooks, and saw that at one point Celery even provides off-the-shelf what we had implemented as custom code in the Scraper project (the scheduler).
After reading this chapter, I advise you to revisit Celery and Pika and learn them well, in that order.
In your day-to-day Python work, using Celery will come to feel like second nature, and proper tooling for background jobs will set you apart from other Python programmers.