Recently, I submitted some pyspark ETL jobs on our data science EMR cluster, and not long after submission, I encountered a strange error:

```
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000005b7027000, 1234763776, 0) failed; error='Cannot allocate memory' (errno=12)
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 1234763776 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /home/hadoop/ce_studies/hs_err_pid28194.log
```

I was puzzled, since I had reserved plenty of memory (15G) on the driver as well as the executors:

```
nohup spark-submit --master yarn --deploy-mode client \
  --num-executors 10 --executor-cores 6 \
  --executor-memory 15G --driver-memory 15G \
  --conf '=false' --conf '=' --conf '=2048' --conf '.algorithm.version=2' \
  cev2_processor.py --start_date --end_date > cev2_processor.log 2>&1
```

I couldn't understand why the JRE was unable to reserve just 1.2GB of memory (the figure in the initial error message) on the driver when I had reserved 15GB in my spark-submit! Additionally, I knew I had 16GB of physical memory on the driver as well as on each task node.
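As a quick sanity check, the settings the application actually ended up with can be read back at runtime and compared against what `free -g` reports on the driver node. The snippet below is only a minimal sketch of that idea, assuming a SparkSession is available inside the job; note that `spark.executor.memoryOverhead` only shows up in the conf if it was set explicitly (older Spark versions use `spark.yarn.executor.memoryOverhead` instead).

```python
from pyspark.sql import SparkSession

# Minimal sketch: print the memory-related settings this application is
# actually running with, so they can be compared against the spark-submit
# flags and the physical memory reported by `free -g` on the driver node.
spark = SparkSession.builder.getOrCreate()
conf = spark.sparkContext.getConf()

for key in ("spark.driver.memory",
            "spark.executor.memory",
            "spark.executor.memoryOverhead"):  # only present if explicitly set
    print(key, "=", conf.get(key, "unset"))
```

Of course, this only confirms what the driver and executor JVMs were asked to use; it says nothing yet about why the operating system refused the allocation.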