Hadoop MapReduce Optimization and Resource Scheduler

All applications are placed in a single queue.

Limitations

All resources are divided among different queues in proportion.

Each queue can apply its own scheduling policy.

Advantages

Capacity Scheduler

Configure YARN to use the CapacityScheduler in yarn-site.xml.
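A minimal sketch of that yarn-site.xml setting (the property and class names are the standard YARN ones; verify them against your Hadoop version):

```xml
<!-- yarn-site.xml: tell the ResourceManager to use the CapacityScheduler -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
```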

Create capacity-scheduler.xml in the Hadoop configuration directory /usr/local/hadoop/etc/hadoop and add the following information:
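As an illustration, a minimal capacity-scheduler.xml might divide cluster capacity between two queues; the queue names (default, analysis) and the 60/40 split below are assumptions for the example, not values from this article:

```xml
<configuration>
  <!-- Child queues under the root queue -->
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,analysis</value>
  </property>
  <!-- Percentage of cluster capacity guaranteed to each queue -->
  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>60</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.analysis.capacity</name>
    <value>40</value>
  </property>
</configuration>
```

The capacities of the child queues under a parent must sum to 100.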

Configuration description

Fair Scheduler

The purpose of the Fair Scheduler is to ensure that, over time, all running applications receive an equal share of cluster resources.

Fair Scheduler configuration

Add the following information to yarn-site.xml in the Hadoop configuration directory /usr/local/hadoop/etc/hadoop:
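A sketch of those yarn-site.xml entries, assuming the allocation file lives in the configuration directory named above (the two property names are the standard YARN ones):

```xml
<!-- yarn-site.xml: switch the ResourceManager to the FairScheduler -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
<!-- Path to the fair-scheduler allocation file -->
<property>
  <name>yarn.scheduler.fair.allocation.file</name>
  <value>/usr/local/hadoop/etc/hadoop/fair-scheduler.xml</value>
</property>
```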

Create a new fair scheduling configuration file fair-scheduler.xml with the following content:
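A minimal allocation file using the data_bi queue named in this article might look like the following; the resource limits and weight are illustrative values, not from the original:

```xml
<?xml version="1.0"?>
<allocations>
  <queue name="data_bi">
    <!-- Guaranteed and maximum resources for this queue (example values) -->
    <minResources>1024 mb, 1 vcores</minResources>
    <maxResources>4096 mb, 4 vcores</maxResources>
    <!-- Relative share of the cluster compared with sibling queues -->
    <weight>1.0</weight>
    <!-- Scheduling policy within the queue: fair, fifo, or drf -->
    <schedulingPolicy>fair</schedulingPolicy>
  </queue>
</allocations>
```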

The configuration above uses the user name data_bi as the queue name for fair scheduling.

yarn-site.xml parameter descriptions

fair-scheduler.xml parameter descriptions

If the NameNode fails to start when Hadoop is launched, the NameNode log displays the following error:

This happens because HDFS was previously started as the root user, so the files it created are no longer accessible to the Hadoop user. You can take the following actions to restore access.
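A sketch of the recovery steps, assuming the Hadoop installation lives under /usr/local/hadoop as in this article and that the Hadoop service account is named hadoop (adjust the user, group, and paths to your cluster):

```shell
# Stop HDFS before changing ownership
stop-dfs.sh

# Return ownership of the Hadoop installation (including logs and any
# data directories under it) from root to the hadoop user
sudo chown -R hadoop:hadoop /usr/local/hadoop

# Restart HDFS as the hadoop user, not as root
start-dfs.sh
```

If dfs.namenode.name.dir or dfs.datanode.data.dir point outside the installation directory, run the same chown on those paths as well.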