Tested with Apache Spark 2.1.0, Python 2.7.13 and Java 1.8.0_112
Apache Spark is an open source framework for efficient cluster computing with a strong interface for data parallelism and fault tolerance. This text describes how to install Spark on macOS and integrate pyspark with the Jupyter (ipython) notebook.
For older versions of Spark and ipython, please see also the previous version of this text.
Install Java Development Kit
Download and install the JDK from oracle.com.
Add the following code to your shell profile, e.g. .bash_profile:
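The export itself was not preserved in this copy; a minimal sketch, assuming the stock macOS java_home helper, would be:

```bash
# Point JAVA_HOME at the JDK just installed (macOS resolves the path for us)
export JAVA_HOME=$(/usr/libexec/java_home)
```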
Install Apache Spark
You can use Homebrew, the macOS package manager (http://brew.sh/):
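The command itself is missing from this copy; with Homebrew installed it is simply:

```bash
# Installs the latest apache-spark formula under the Homebrew prefix
brew install apache-spark
```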
Set up env variables
Add the following code to your shell profile, e.g. .bash_profile:
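The original exports were lost here; a sketch for the tested versions (Spark 2.1.0 from Homebrew, which bundles py4j 0.10.4 — adjust both paths to your own install):

```bash
# SPARK_HOME points at Spark's libexec directory inside the Homebrew cellar
export SPARK_HOME="/usr/local/Cellar/apache-spark/2.1.0/libexec"
# Make pyspark and the bundled py4j importable from any Python process
export PYTHONPATH="$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.10.4-src.zip:$PYTHONPATH"
```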
You can check the SPARK_HOME path using the following brew command.
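For example (output details vary by Homebrew prefix and Spark version):

```bash
# Prints the installed version and cellar path; SPARK_HOME is its libexec subdir
brew info apache-spark
# Lists the bundled py4j zip, whose exact name goes into PYTHONPATH
ls "$SPARK_HOME/python/lib/"
```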
Also check the py4j version and subpath, as it may differ from version to version.

Ipython profile
Since profiles are not supported in jupyter any more, trying to use one now produces a deprecation warning. It seems that it is no longer possible to run various custom startup files as it was with ipython profiles. Thus, the easiest way is to run the pyspark init script manually at the beginning of your notebook, or to follow the alternative way below.

Run ipython
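The command was dropped from this copy; starting the notebook server is presumably just:

```bash
# Launches the Jupyter notebook server in the current directory
jupyter notebook
```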
Initialize pyspark inside the notebook:
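The init snippet did not survive in this copy; a sketch that executes Spark's bundled shell bootstrap (shell.py is the script the pyspark command itself runs, and this assumes SPARK_HOME and PYTHONPATH are set as above):

```python
import os

# Run Spark's own shell bootstrap; it builds a SparkContext and binds it to `sc`
spark_init = os.path.join(os.environ['SPARK_HOME'], 'python/pyspark/shell.py')
exec(open(spark_init).read())
```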
The sc variable should now be available.

Alternatively
You can also force the pyspark shell command to run the ipython web notebook instead of the command-line interactive interpreter. To do so, you have to add the following env variables and then simply run pyspark, which will open a web notebook with sc available automatically.
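The variables in question are Spark's standard driver-Python knobs; a sketch, assuming jupyter is on your PATH:

```bash
# Use Jupyter as the Python frontend for the pyspark driver
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS='notebook'

# Now this opens a web notebook instead of the plain REPL
pyspark
```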