Top 5 Mistakes When Writing Spark Applications

In the world of distributed computing, Spark has simplified development and opened the doors for many to start writing distributed programs. Folks with little to no distributed coding experience can now write just a couple of lines of code that immediately put hundreds or thousands of machines to work creating business value. However, even though Spark code is easy to write and read, users still run into long-running, slow-performing jobs and out-of-memory errors. Thankfully, most of these issues have nothing to do with Spark itself but with the approach we take when using it. In the presentation below from Spark Summit 2016, Mark Grover goes over the top five mistakes he has seen in the field that prevent people from getting the most out of their Spark clusters. When these issues are addressed, it is not uncommon to see the same job run 10x or even 100x faster on the same cluster, with the same data, just with a different approach.
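To make the "couple of lines of code" point concrete, here is a minimal word-count sketch in Scala (this is not taken from the talk, and the HDFS input and output paths are hypothetical). Each transformation in the chain is automatically parallelized across the cluster's executors:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCount")
    val sc = new SparkContext(conf)

    sc.textFile("hdfs:///data/input")          // hypothetical input path
      .flatMap(_.split("\\s+"))                // split each line into words
      .map(word => (word, 1))                  // pair each word with a count of 1
      .reduceByKey(_ + _)                      // combine counts map-side before the shuffle
      .saveAsTextFile("hdfs:///data/output")   // hypothetical output path

    sc.stop()
  }
}
```

A half-dozen lines of driver code like this are enough to fan work out across every executor in the cluster, which is exactly why Spark is so approachable, and also why an inefficient approach can be so costly at scale.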

Mark is a software engineer working on Apache Spark at Cloudera. He is a co-author of the book Hadoop Application Architectures and also wrote a section of Programming Hive. Mark is a committer on Apache Bigtop and a committer and PMC member on Apache Sentry, and he has contributed to a number of other open source projects, including Apache Hadoop, Apache Hive, Apache Sqoop, and Apache Flume.

