Overview of Spark configurations

I find myself looking for an overview too often, so let's create a rough overview of commonly used configs for Spark. As a start, create a Spark session with the default config:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .master(SPARK_MASTER) \
    .appName("app name") \
    .getOrCreate()
```

The Spark Context represents the connection to the cluster; it communicates with the lower-level APIs and RDDs. Some resource settings, starting with the driver:

```python
...
    .config("spark.driver.memory", "8g")
...
    .config("spark.cores.max", "4")
...
```
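The teaser cuts off before the full list of settings. As a minimal sketch of where it is heading, here is one complete builder call combining the options quoted above; the `SPARK_MASTER` value and the executor settings are illustrative assumptions, not taken from the post:

```python
from pyspark.sql import SparkSession

# Hypothetical master URL; replace with your cluster's, e.g. "yarn"
# or "spark://host:7077".
SPARK_MASTER = "local[*]"

spark = (
    SparkSession.builder
    .master(SPARK_MASTER)
    .appName("app name")
    # Driver resources
    .config("spark.driver.memory", "8g")
    # Cap on the total cores the application may claim (standalone mode)
    .config("spark.cores.max", "4")
    # Executor resources (illustrative values, not from the post)
    .config("spark.executor.memory", "4g")
    .config("spark.executor.cores", "2")
    .getOrCreate()
)

# The underlying Spark Context mentioned above is available as:
sc = spark.sparkContext
```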

November 8, 2021 · 2 min · 330 words · Joost

Provide Spark with cross-account access

Providing Spark with access to resources in a different AWS account turned out to be quite tricky to figure out. Let's assume you have two AWS accounts: the alpha account, where you run Python with the IAM role alpha-role and have access to the Spark cluster; and the beta account, which holds the S3 bucket you want to access. You could grant S3 read access to alpha-role directly, but it is more robust and easier to manage to create an access role in the beta account that can be assumed by alpha-role....
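The excerpt stops before the actual setup, so as one plausible sketch (not necessarily the approach the full post takes): the alpha side can assume the beta access role via STS and hand the temporary credentials to Spark's S3A connector. The role ARN, bucket, and session name below are hypothetical, and the sketch assumes the cluster has the hadoop-aws (S3A) connector on its classpath:

```python
import boto3
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cross-account").getOrCreate()

# Hypothetical ARN of the access role created in the beta account,
# with a trust policy allowing alpha-role to assume it.
BETA_ACCESS_ROLE_ARN = "arn:aws:iam::222222222222:role/beta-access-role"

# Running as alpha-role, assume the access role in the beta account.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn=BETA_ACCESS_ROLE_ARN,
    RoleSessionName="spark-cross-account",
)["Credentials"]

# Hand the temporary credentials to the S3A connector so Spark can
# read the bucket in the beta account.
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
hadoop_conf.set(
    "fs.s3a.aws.credentials.provider",
    "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider",
)
hadoop_conf.set("fs.s3a.access.key", creds["AccessKeyId"])
hadoop_conf.set("fs.s3a.secret.key", creds["SecretAccessKey"])
hadoop_conf.set("fs.s3a.session.token", creds["SessionToken"])

# Hypothetical bucket path in the beta account.
df = spark.read.parquet("s3a://beta-bucket/some/prefix/")
```

Note that the STS credentials expire (one hour by default), so long-running jobs may instead point `fs.s3a.assumed.role.arn` at the role and let the connector refresh credentials itself.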

August 21, 2020 · 2 min · 413 words · Joost