 

In what situations can I use Dask instead of Apache Spark? [closed]

I am currently using Pandas and Spark for data analysis. I found that Dask provides parallelized NumPy arrays and Pandas DataFrames.

Pandas is easy and intuitive for doing data analysis in Python, but I have difficulty handling multiple larger DataFrames in Pandas due to limited system memory.
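
For context, here is a minimal sketch of the Dask pattern I am describing; the file pattern and column names ("data-*.csv", "user_id", "amount") are hypothetical:

```python
import dask.dataframe as dd

# Lazily point at a set of CSV files; nothing is loaded into memory yet.
df = dd.read_csv("data-*.csv")

# Pandas-style operations build a task graph instead of running eagerly.
result = df.groupby("user_id").amount.mean()

# compute() executes the graph in parallel, partition by partition,
# so the full dataset never has to fit in RAM at once.
print(result.compute())
```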

Simple Answer:

Apache Spark is an all-inclusive framework combining distributed computing, SQL queries, machine learning, and more that runs on the JVM and is commonly co-deployed with other Big Data frameworks like Hadoop. ... Generally Dask is smaller and lighter weight than Spark.

I learned the following details from http://dask.pydata.org/en/latest/spark.html:

  • Dask is lightweight.
  • Dask is typically used on a single machine, but also runs well on a distributed cluster.
  • Dask provides parallel arrays, dataframes, machine learning, and custom algorithms.
  • Dask has an advantage for Python users because it is itself a Python library, so serialization and debugging when things go wrong happens more smoothly.
  • Dask gives up high-level understanding to allow users to express more complex parallel algorithms (see the dask.delayed sketch after this list).
  • Dask is lighter weight and is easier to integrate into existing code and hardware.
  • If you want a single project that does everything and you’re already on Big Data hardware then Spark is a safe bet
  • Spark is typically used on small to medium sized clusters but also runs well on a single machine.
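
To illustrate the point above about expressing more complex parallel algorithms, here is a minimal dask.delayed sketch; inc and add are hypothetical stand-ins for real work:

```python
from dask import delayed

@delayed
def inc(x):
    return x + 1

@delayed
def add(x, y):
    return x + y

# Calling delayed functions builds a task graph; nothing runs yet.
a = inc(1)
b = inc(2)
total = add(a, b)

# compute() executes the graph; the independent tasks a and b
# can run in parallel.
print(total.compute())  # 5
```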

I learned more about Dask from https://www.continuum.io/blog/developer-blog/high-performance-hadoop-anaconda-and-dask-your-cluster:

  • If you’re running into memory issues, storage limitations, or CPU boundaries on a single machine when using Pandas, NumPy, or other computations with Python, Dask can help you scale up on all of the cores on a single machine, or scale out on all of the cores and memory across your cluster.
  • Dask works well on a single machine to make use of all of the cores on your laptop and process larger-than-memory data.
  • Dask scales up resiliently and elastically on clusters with hundreds of nodes.
  • Dask works natively from Python with data in different formats and storage systems, including the Hadoop Distributed File System (HDFS) and Amazon S3. Anaconda and Dask can work with your existing enterprise Hadoop distribution, including Cloudera CDH and Hortonworks HDP. (A sketch of reading from S3 on a cluster follows this list.)
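
A minimal sketch of connecting to a cluster and reading from S3; the scheduler address, bucket, and path are placeholders, and reading from S3 also requires the s3fs package:

```python
from dask.distributed import Client
import dask.dataframe as dd

# Connect to an existing Dask scheduler; the address is a placeholder
# for your cluster's scheduler endpoint.
client = Client("tcp://scheduler-address:8786")

# Read directly from S3 (requires s3fs); the bucket and path are hypothetical.
df = dd.read_csv("s3://my-bucket/logs/*.csv")

# The work is distributed across the cluster's workers.
print(df.describe().compute())
```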

http://dask.pydata.org/en/latest/dataframe-overview.html

Limitations

Dask.DataFrame does not implement the entire Pandas interface. Users expecting this will be disappointed. Notably, dask.dataframe has the following limitations:

  1. Setting a new index from an unsorted column is expensive.
  2. Many operations, like groupby-apply and join on unsorted columns, require setting the index, which, as mentioned above, is expensive (a sketch follows this list).
  3. The Pandas API is very large. Dask.dataframe does not attempt to implement many Pandas features or any of the more exotic data structures like NDFrames.
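
To make the index cost in items 1 and 2 concrete, here is a minimal sketch with hypothetical file and column names:

```python
import dask.dataframe as dd

# Hypothetical inputs and column names.
left = dd.read_csv("events-*.csv")
right = dd.read_csv("users-*.csv")

# Joining on an unsorted column forces Dask to shuffle data between
# partitions -- the expensive step described in items 1 and 2.
joined = left.merge(right, on="user_id")

# Paying the set_index cost once lets later index-aligned operations
# reuse the sorted partitions instead of re-shuffling.
left = left.set_index("user_id")
right = right.set_index("user_id")
joined_fast = left.join(right)
```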

Thanks to the Dask developers. It seems like a very promising technology.

Overall, I understand that Dask is simpler to use than Spark. Dask is as flexible as Pandas, with more power to compute in parallel across more CPUs.

I understand all the above facts about Dask.

So, roughly how much data (in terabytes) can be processed with Dask?

asked Aug 10 '16 by Hariprasad


People also ask

Should I use DASK or Spark?

Spark is mature and all-inclusive. If you want a single project that does everything and you're already on Big Data hardware, then Spark is a safe bet, especially if your use cases are typical ETL + SQL and you're already using Scala. Dask is lighter weight and is easier to integrate into existing code and hardware.

What do you think the most useful DASK feature is?

Dask can enable efficient parallel computations on single machines by leveraging their multi-core CPUs and streaming data efficiently from disk. It can run on a distributed cluster. Dask also allows the user to replace clusters with a single-machine scheduler, which brings down the overhead.
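
A minimal sketch of switching schedulers; the file and column names are hypothetical:

```python
import dask.dataframe as dd

df = dd.read_csv("data-*.csv")
task = df.amount.sum()

# Default on a single machine: the threaded scheduler uses all cores.
print(task.compute(scheduler="threads"))

# The synchronous scheduler runs everything in a single thread --
# minimal overhead, and convenient for debugging with pdb.
print(task.compute(scheduler="synchronous"))
```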

Where is DASK used?

Dask is used in Climate Science, Energy, Hydrology, Meteorology, and Satellite Imaging. Oceanographers produce massive simulated datasets of the earth's oceans. They couldn't even look at the outputs before Dask.

Should I use DASK or Pandas?

If you love Pandas and Numpy but were sometimes struggling with data that would not fit into RAM then Dask is definitely what you need. Dask supports the Pandas dataframe and Numpy array data structures and is able to either be run on your local computer or be scaled up to run on a cluster.
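
A minimal dask.array sketch of the NumPy side of this:

```python
import dask.array as da

# A 50,000 x 50,000 array of random floats (~20 GB), split into
# 5,000 x 5,000 chunks -- larger than most machines' RAM.
x = da.random.random((50_000, 50_000), chunks=(5_000, 5_000))

# The familiar NumPy API, evaluated lazily chunk by chunk.
y = (x + x.T).mean(axis=0)

# compute() returns a plain NumPy array holding the small result.
result = y.compute()
print(result.shape)  # (50000,)
```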


1 Answer

You may want to read the Dask comparison to Apache Spark:

Apache Spark is an all-inclusive framework combining distributed computing, SQL queries, machine learning, and more that runs on the JVM and is commonly co-deployed with other Big Data frameworks like Hadoop. It was originally optimized for bulk data ingest and querying common in data engineering and business analytics but has since broadened out. Spark is typically used on small to medium sized clusters but also runs well on a single machine.

Dask is a parallel programming library that combines with the Numeric Python ecosystem to provide parallel arrays, dataframes, machine learning, and custom algorithms. It is based on Python and the foundational C/Fortran stack. Dask was originally designed to complement other libraries with parallelism, particularly for numeric computing and advanced analytics, but has since broadened out. Dask is typically used on a single machine, but also runs well on a distributed cluster.

Generally Dask is smaller and lighter weight than Spark. This means that it has fewer features and instead is intended to be used in conjunction with other libraries, particularly those in the numeric Python ecosystem.

answered Oct 14 '22 by MaxU - stop WAR against UA