
Hadoop Memory Configuration

There are two ways to configure memory for Hadoop:
1) use the helper script that ships with the manual Hadoop installation files;
2) calculate the YARN and MapReduce memory sizes by hand and set them yourself.

Only the script-based method is recorded here:

1. Download the script from Hortonworks with wget:
wget http://public-repo-1.hortonworks.com/HDP/tools/2.1.1.0/hdp_manual_install_rpm_helper_files-2.1.1.385.tar.gz

2. Extract the archive:
tar -xzvf hdp_manual_install_rpm_helper_files-2.1.1.385.tar.gz

3. Change to the directory containing the script: cd hdp_manual_install_rpm_helper_files-2.1.1.385/scripts
Run the script as python hdp-configuration-utils.py with the following options:

Option      Description
-c CORES    The number of cores on each host.
-m MEMORY   The amount of memory on each host, in GB.
-d DISKS    The number of disks on each host.
-k HBASE    "True" if HBase is installed, "False" if not.

The number of cores can be obtained with the nproc command, the amount of memory with free -m, and the number of disks with lsblk -s or sudo fdisk -l:

[root@server31 scripts]# nproc
48
[root@server31 scripts]# free -m
              total        used        free      shared  buff/cache   available
Mem:         128658       26607         739          37      101311      101165
Swap:          4095         611        3484
[root@server31 scripts]# lsblk -s
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdm1      8:193  0     1G  0 part /boot
└─sdm     8:192  0 228.8G  0 disk
cl-root 253:0    0    50G  0 lvm  /
└─sdn1    8:209  0  50.1G  0 part
  └─sdn   8:208  0  50.1G  0 disk
cl-swap 253:1    0     4G  0 lvm  [SWAP]
└─sda1    8:1    0   1.8T  0 part
  └─sda   8:0    0   1.8T  0 disk
cl-home 253:2    0  22.1T  0 lvm  /home
├─sda1    8:1    0   1.8T  0 part
│ └─sda   8:0    0   1.8T  0 disk
├─sdb1    8:17   0   1.8T  0 part
│ └─sdb   8:16   0   1.8T  0 disk
├─sdc1    8:33   0   1.8T  0 part
│ └─sdc   8:32   0   1.8T  0 disk
├─sdd1    8:49   0   1.8T  0 part
│ └─sdd   8:48   0   1.8T  0 disk
├─sde1    8:65   0   1.8T  0 part
│ └─sde   8:64   0   1.8T  0 disk
├─sdf1    8:81   0   1.8T  0 part
│ └─sdf   8:80   0   1.8T  0 disk
├─sdg1    8:97   0   1.8T  0 part
│ └─sdg   8:96   0   1.8T  0 disk
├─sdh1    8:113  0   1.8T  0 part
│ └─sdh   8:112  0   1.8T  0 disk
├─sdi1    8:129  0   1.8T  0 part
│ └─sdi   8:128  0   1.8T  0 disk
├─sdj1    8:145  0   1.8T  0 part
│ └─sdj   8:144  0   1.8T  0 disk
├─sdk1    8:161  0   1.8T  0 part
│ └─sdk   8:160  0   1.8T  0 disk
├─sdl1    8:177  0   1.8T  0 part
│ └─sdl   8:176  0   1.8T  0 disk
├─sdm2    8:194  0 227.8G  0 part
│ └─sdm   8:192  0 228.8G  0 disk
└─sdn1    8:209  0  50.1G  0 part
  └─sdn   8:208  0  50.1G  0 disk
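
If these values must be collected from many hosts, the same three numbers can also be gathered programmatically. The following is a minimal sketch (not part of the original script), assuming a Linux host where /proc/meminfo and lsblk are available; it mirrors nproc, free -m, and a physical-disk count:

import os
import subprocess

# Logical CPU count, equivalent to `nproc`.
cores = os.cpu_count()

# Total RAM in GB; /proc/meminfo reports MemTotal in kB. Note the kernel
# reports slightly less than the nominal capacity (128658 MB here vs 128 GB).
with open("/proc/meminfo") as f:
    mem_kb = int(next(line for line in f if line.startswith("MemTotal")).split()[1])
mem_gb = round(mem_kb / (1024 * 1024))

# Physical disk count: `lsblk -d -n -o TYPE` lists only top-level devices,
# one TYPE per line; count the entries whose type is "disk".
out = subprocess.check_output(["lsblk", "-d", "-n", "-o", "TYPE"], text=True)
disks = out.split().count("disk")

print(f"python hdp-configuration-utils.py -c {cores} -m {mem_gb} -d {disks} -k True")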

Hence (free -m reports 128658 MB total, i.e. a nominal 128 GB): python hdp-configuration-utils.py -c 48 -m 128 -d 16 -k True
The result is as follows:

[root@server31 scripts]# python hdp-configuration-utils.py  -c 48 -m 128 -d 16 -k True
 Using cores=48 memory=128GB disks=16 hbase=True
 Profile: cores=48 memory=81920MB reserved=48GB usableMem=80GB disks=16
 Num Container=29
 Container Ram=2560MB
 Used Ram=72GB
 Unused Ram=48GB
 yarn.scheduler.minimum-allocation-mb=2560
 yarn.scheduler.maximum-allocation-mb=74240
 yarn.nodemanager.resource.memory-mb=74240
 mapreduce.map.memory.mb=2560
 mapreduce.map.java.opts=-Xmx2048m
 mapreduce.reduce.memory.mb=2560
 mapreduce.reduce.java.opts=-Xmx2048m
 yarn.app.mapreduce.am.resource.mb=2560
 yarn.app.mapreduce.am.command-opts=-Xmx2048m
 mapreduce.task.io.sort.mb=1024
 tez.am.resource.memory.mb=2560
 tez.am.java.opts=-Xmx2048m
 hive.tez.container.size=2560
 hive.tez.java.opts=-Xmx2048m
 hive.auto.convert.join.noconditionaltask.size=671088000
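
The recommendations above follow the published HDP sizing formula: reserve memory for the OS stack (and again for HBase if it is installed), cap the container count at min(2 x CORES, ceil(1.8 x DISKS), usable RAM / minimum container size), and round the per-container RAM down to a multiple of 512 MB. Below is a minimal sketch of that arithmetic, assuming the reservation tables from the HDP 2.x documentation (the real script also handles small-memory hosts and other edge cases this omits):

import math

# Host RAM in GB -> GB reserved, assumed from the HDP 2.x sizing tables.
STACK_RESERVED = [(8, 2), (16, 2), (24, 4), (48, 6), (64, 8), (72, 8),
                  (96, 12), (128, 24), (256, 32), (512, 64)]
HBASE_RESERVED = [(8, 1), (16, 2), (24, 4), (48, 8), (64, 8), (72, 8),
                  (96, 16), (128, 24), (256, 32), (512, 64)]

def lookup(table, memory_gb):
    for limit, reserved in table:
        if memory_gb <= limit:
            return reserved
    return table[-1][1]

def recommend(cores, memory_gb, disks, hbase):
    reserved = lookup(STACK_RESERVED, memory_gb)
    if hbase:
        reserved += lookup(HBASE_RESERVED, memory_gb)
    usable_mb = (memory_gb - reserved) * 1024
    # Minimum container size grows with host RAM; hosts over 24 GB use 2048 MB
    # (the smaller tiers of the table are omitted here).
    min_container = 2048 if memory_gb > 24 else 1024
    containers = int(min(2 * cores,
                         math.ceil(1.8 * disks),
                         usable_mb // min_container))
    container_mb = usable_mb // containers
    if container_mb > 1024:  # round down to a 512 MB multiple
        container_mb = (container_mb // 512) * 512
    return containers, container_mb, containers * container_mb

# 48 cores, 128 GB, 16 disks, HBase installed -> (29, 2560, 74240): the
# Num Container, Container Ram, and yarn.nodemanager.resource.memory-mb
# values shown above. The -Xmx2048m heap options are 0.8 x the container size.
print(recommend(48, 128, 16, True))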

This article originally appeared on the Linux公社 website (www.linuxidc.com). Original link: https://www.linuxidc.com/Linux/2014-09/106520.htm
