Spark on YARN
What is the Spark Driver?
http://www.it165.net/pro/html/201404/11952.html
3/17 - 3/18 There are two Spark on YARN deployment modes: yarn-client and yarn-cluster.
The key differences between them:
In yarn-client mode, the driver runs in the client process; the AM only requests executor resources.
In yarn-cluster mode, the driver runs inside the AM, on a node in the cluster.
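For reference, the mode is chosen at submit time (flags as in Spark 1.x; the class and jar names are taken from the logs later in these notes):
spark-submit --master yarn-client --class com.tolon.spark.SlowNode WordCount.jar     # driver on the client
spark-submit --master yarn-cluster --class com.tolon.spark.SlowNode WordCount.jar    # driver inside the AM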
In the YARN model:
The master is the ResourceManager (RM).
The slaves are the NodeManagers (NM).
In yarn-cluster mode:
The Spark driver runs as the ApplicationMaster inside the YARN cluster.
The client then submits the application to run on worker nodes in the cluster, and a new ApplicationMaster is created for each application.
The AM manages the life span of the application.
Because the driver runs in the cluster, the client does not need to stay up --- so we cannot see the result on the client, but we can check it in the history server.
The better practice is to save results to HDFS instead of writing them to stdout, as in the sketch below.
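A minimal sketch of that practice (the HDFS input/output paths are hypothetical; only the saveAsTextFile call matters here):

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("WordCount"))
    val counts = sc.textFile("hdfs://master:9000/input/words.txt")  // hypothetical input path
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
    // In yarn-cluster mode the driver's stdout only lands in the container logs,
    // so persist results to HDFS rather than printing them.
    counts.saveAsTextFile("hdfs://master:9000/output/wordcount")    // hypothetical output path
    sc.stop()
  }
}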
Steps:
1. The client connects to the RM and submits the jars to HDFS.
(Connect to the RM's ApplicationsManager to get metrics, queue, and resource information; upload the app jar and the spark-assembly jar; set up the container launch context and launch_container.sh ....)
2. The RM asks an NM for resources, then launches the Spark ApplicationMaster (AM); each SparkContext has one AM.
3. The AM requests executor containers from the RM and has the NMs launch them; the executors then register back with the driver.
In yarn-client mode:
If the user kills the terminal, the Spark application is killed with it.
The local driver (in the client) communicates with all executor containers and collects the results.
** What is the ApplicationMaster (AM)?
A special piece of code
that helps coordinate tasks on the YARN cluster.
It is the first process run after the application is submitted.
The NodeManager launches containers on behalf of the AM:
first the RM grants containers to the AM,
then the AM has the NM launch the containers ...
Key concepts: Application, Container, Local...
Metrics
Spark ListenerBus:
https://github.com/apache/spark/blob/8ef3399aff04bf8b7ab294c0f55bcf195995842b/core/src/main/scala/org/apache/spark/util/ListenerBus.scala
// An event bus which posts events to its listeners.
Asynchronously passes SparkListenerEvents to registered SparkListeners
That is, all Spark messages (SparkListenerEvents) are sent asynchronously to the already-registered SparkListeners.
In SparkContext, a LiveListenerBus instance is created first; its main responsibilities are:
- It holds the message queue and is responsible for buffering events.
- It holds the registered listeners and is responsible for dispatching events to them (see the toy sketch below).
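A toy sketch of that design (NOT Spark's actual API): a blocking queue buffers posted events, and one daemon thread drains it and fans each event out to the registered listeners.

import java.util.concurrent.LinkedBlockingQueue

class ToyListenerBus[E] {
  private val queue = new LinkedBlockingQueue[E]()          // buffers posted events
  @volatile private var listeners = List.empty[E => Unit]   // registered listeners

  def addListener(l: E => Unit): Unit = synchronized { listeners = l :: listeners }

  // post() only enqueues, so producers never block on slow listeners
  def post(event: E): Unit = queue.put(event)

  private val dispatcher = new Thread("toy-listener-bus") {
    override def run(): Unit = while (true) {
      val event = queue.take()              // blocks until an event is available
      listeners.foreach(l => l(event))      // fan the event out to every listener
    }
  }
  dispatcher.setDaemon(true)
  dispatcher.start()
}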
SparkUI and the Spark history server --- JSON properties?
This defines the Spark event format.
https://github.com/apache/spark/blob/8ef3399aff04bf8b7ab294c0f55bcf195995842b/core/src/main/scala/org/apache/spark/scheduler/SparkListener.scala
trait SparkListenerEvent {
  /* Whether output this event to the event log */
  protected[spark] def logEvent: Boolean = true
}
@DeveloperApi
case class SparkListenerTaskEnd(
    stageId: Int,
    stageAttemptId: Int,
    taskType: String,
    reason: TaskEndReason,
    taskInfo: TaskInfo,
    // may be null if the task has failed
    @Nullable taskMetrics: TaskMetrics)
  extends SparkListenerEvent
.....
}
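Since SparkListenerTaskEnd carries TaskInfo and TaskMetrics, a user listener can pick them up through the public listener API; a small sketch against the Spark 1.5-era API:

import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}

class TaskEndLogger extends SparkListener {
  override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
    val info = taskEnd.taskInfo
    // taskMetrics may be null if the task failed (see the case class above)
    val metrics = Option(taskEnd.taskMetrics)
    println(s"stage ${taskEnd.stageId} task ${info.taskId}: " +
      s"${info.duration} ms, reason=${taskEnd.reason}" +
      metrics.map(m => s", resultSize=${m.resultSize}").getOrElse(""))
  }
}
// Attach to a live SparkContext:
// sc.addSparkListener(new TaskEndLogger)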
This transforms event-log events into JSON format.
https://github.com/apache/spark/blob/eeaf45b92695c577279f3a17d8c80ee40425e9aa/core/src/main/scala/org/apache/spark/util/JsonProtocol.scala
def taskEndToJson(taskEnd: SparkListenerTaskEnd): JValue = {
  val taskEndReason = taskEndReasonToJson(taskEnd.reason)
  val taskInfo = taskEnd.taskInfo
  val taskMetrics = taskEnd.taskMetrics
  val taskMetricsJson = if (taskMetrics != null) taskMetricsToJson(taskMetrics) else JNothing
  ("Event" -> Utils.getFormattedClassName(taskEnd)) ~
  ("Stage ID" -> taskEnd.stageId) ~
  ("Stage Attempt ID" -> taskEnd.stageAttemptId) ~
  ("Task Type" -> taskEnd.taskType) ~
  ("Task End Reason" -> taskEndReason) ~
  ("Task Info" -> taskInfoToJson(taskInfo)) ~
  ("Task Metrics" -> taskMetricsJson)
}
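So each event-log line is one JSON object; an abridged, illustrative TaskEnd record would look roughly like:
{"Event":"SparkListenerTaskEnd","Stage ID":0,"Stage Attempt ID":0,"Task Type":"ResultTask","Task End Reason":{"Reason":"Success"},"Task Info":{"Task ID":0,...},"Task Metrics":{...}}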
In DAGScheduler:
eventQueue: DAGScheduler sends events to the eventQueue.
eventLoop thread: takes events out of the eventQueue and handles them.
DAGScheduler implements TaskSchedulerListener and registers it with the TaskScheduler, so the TaskScheduler can call back into it at any time; the TaskSchedulerListener implementation really just posts the various events to the eventQueue. A toy sketch of this loop follows.
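Unlike the listener bus above (which fans out to many listeners), this is a single-consumer loop; a toy version (the real EventLoop also handles stop and errors):

import java.util.concurrent.LinkedBlockingQueue

abstract class ToyEventLoop[E](name: String) {
  private val eventQueue = new LinkedBlockingQueue[E]()

  def onReceive(event: E): Unit                     // the "deal with it" step

  def post(event: E): Unit = eventQueue.put(event)  // called by producers

  private val thread = new Thread(name) {
    override def run(): Unit = while (true) {
      onReceive(eventQueue.take())                  // strictly one event at a time, in order
    }
  }
  def start(): Unit = { thread.setDaemon(true); thread.start() }
}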
Akka has a similar Event Bus:
http://doc.akka.io/docs/akka/snapshot/scala/event-bus.html
Summary:
1) During SparkContext startup, an EventLoggingListener is created by the SparkContext.
2) The listener creates a log location for the application: spark.eventLog.dir + "/" + applicationId;
3) Each application's log location contains:
a) APPLICATION_COMPLETE (written after SparkContext.stop())
b) EVENT_LOG_x
c) SPARK_VERSION_xxx
d) COMPRESSION_CODEC_xxx (if spark.eventLog.compress is set)
4) When the application receives an event, it records it into EVENT_LOG_x.
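Turning this on is just configuration; a sketch (the HDFS directory is hypothetical):

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.eventLog.enabled", "true")                       // creates the EventLoggingListener
  .set("spark.eventLog.dir", "hdfs://master:9000/spark-logs")  // where per-application logs go
  .set("spark.eventLog.compress", "true")                      // produces the COMPRESSION_CODEC_xxx marker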
The WebUI is initialized in the SparkContext:
// Initialize the Spark UI, registering all associated listeners
private[spark] val ui = new SparkUI(this)
ui.bind()

def bind() {
  assert(!serverInfo.isDefined, "Attempted to bind %s more than once!".format(className))
  try {
    // JettyServer
    serverInfo = Some(startJettyServer("0.0.0.0", port, handlers, conf))
    logInfo("Started %s at http://%s:%d".format(className, publicHostName, boundPort))
  } catch {
    case e: Exception =>
      logError("Failed to bind %s".format(className), e)
      System.exit(1)
  }
}
The UI uses a SparkListener to collect its data:
class JobProgressListener(conf: SparkConf) extends SparkListener with Logging {

  import JobProgressListener._

  type JobId = Int
  type StageId = Int
  type StageAttemptId = Int
  type PoolName = String
  type ExecutorId = String

  // Jobs:
  val activeJobs = new HashMap[JobId, JobUIData]
  val completedJobs = ListBuffer[JobUIData]()
  val failedJobs = ListBuffer[JobUIData]()
  val jobIdToData = new HashMap[JobId, JobUIData]

  // Stages:
  val activeStages = new HashMap[StageId, StageInfo]
  val completedStages = ListBuffer[StageInfo]()
  val skippedStages = ListBuffer[StageInfo]()
  val failedStages = ListBuffer[StageInfo]()
  val stageIdToData = new HashMap[(StageId, StageAttemptId), StageUIData]
  val stageIdToInfo = new HashMap[StageId, StageInfo]
  val stageIdToActiveJobIds = new HashMap[StageId, HashSet[JobId]]
  val poolToActiveStages = HashMap[PoolName, HashMap[StageId, StageInfo]]()
  var numCompletedStages = 0
  var numFailedStages = 0

  // Misc:
  val executorIdToBlockManagerId = HashMap[ExecutorId, BlockManagerId]()
  def blockManagerIds = executorIdToBlockManagerId.values.toSeq

  var schedulingMode: Option[SchedulingMode] = None

  // number of non-active jobs and stages (there is no limit for active jobs and stages):
  val retainedStages = conf.getInt("spark.ui.retainedStages", DEFAULT_RETAINED_STAGES)
  val retainedJobs = conf.getInt("spark.ui.retainedJobs", DEFAULT_RETAINED_JOBS)
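Those last two fields are the knobs that bound the listener's memory (DEFAULT_RETAINED_STAGES and DEFAULT_RETAINED_JOBS are 1000 in this Spark version); a sketch of lowering them:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.ui.retainedJobs", "200")    // trim completedJobs/failedJobs sooner
  .set("spark.ui.retainedStages", "200")  // trim completed/skipped/failed stages sooner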
[DEBUG] [2016-03-16 15:48:38] [Logging$class:logDebug:63] Using the default YARN application classpath: $HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
[DEBUG] [2016-03-16 15:48:38] [Logging$class:logDebug:63] Using the default MR application classpath: $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
[INFO ] [2016-03-16 15:48:38] [Logging$class:logInfo:59] Preparing resources for our AM container
[INFO ] [2016-03-16 15:48:38] [Logging$class:logInfo:59] Uploading resource file:/usr/local/src/spark-1.5.1-bin-hadoop2.6/lib/spark-assembly-1.5.1-hadoop2.6.0.jar -> hdfs://master:9000/user/root/.sparkStaging/application_1458155540402_0009/spark-assembly-1.5.1-hadoop2.6.0.jar
[INFO ] [2016-03-16 15:48:42] [Logging$class:logInfo:59] Uploading resource file:/usr/local/src/hadoop-2.6.0/WordCount.jar -> hdfs://master:9000/user/root/.sparkStaging/application_1458155540402_0009/WordCount.jar
[INFO ] [2016-03-16 15:48:43] [Logging$class:logInfo:59] Uploading resource file:/tmp/spark-6b276128-29b5-431c-9907-72b9d8e0aca5/spark_conf4511951434652813078.zip -> hdfs://master:9000/user/root/.sparkStaging/application_1458155540402_0009/spark_conf4511951434652813078.zip
[DEBUG] [2016-03-16 15:48:45] [Logging$class:logDebug:63] ===============================================================================
[DEBUG] [2016-03-16 15:48:45] [Logging$class:logDebug:63] YARN AM launch context:
[DEBUG] [2016-03-16 15:48:45] [Logging$class:logDebug:63] user class: com.tolon.spark.SlowNode
[DEBUG] [2016-03-16 15:48:45] [Logging$class:logDebug:63] env:
[DEBUG] [2016-03-16 15:48:45] [Logging$class:logDebug:63] CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
[DEBUG] [2016-03-16 15:48:45] [Logging$class:logDebug:63] SPARK_YARN_CACHE_FILES_FILE_SIZES -> 183999807,17079
[DEBUG] [2016-03-16 15:48:45] [Logging$class:logDebug:63] SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1458155540402_0009
[DEBUG] [2016-03-16 15:48:45] [Logging$class:logDebug:63] SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE
[DEBUG] [2016-03-16 15:48:45] [Logging$class:logDebug:63] SPARK_USER -> root
[DEBUG] [2016-03-16 15:48:45] [Logging$class:logDebug:63] SPARK_YARN_MODE -> true
[DEBUG] [2016-03-16 15:48:45] [Logging$class:logDebug:63] SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1458157719714,1458157722722
[DEBUG] [2016-03-16 15:48:45] [Logging$class:logDebug:63] SPARK_YARN_CACHE_FILES -> hdfs://master:9000/user/root/.sparkStaging/application_1458155540402_0009/spark-assembly-1.5.1-hadoop2.6.0.jar#__spark__.jar,hdfs://master:9000/user/root/.sparkStaging/application_1458155540402_0009/WordCount.jar#__app__.jar
[DEBUG] [2016-03-16 15:48:45] [Logging$class:logDebug:63] resources:
[DEBUG] [2016-03-16 15:48:45] [Logging$class:logDebug:63] __app__.jar -> resource { scheme: "hdfs" host: "master" port: 9000 file: "/user/root/.sparkStaging/application_1458155540402_0009/WordCount.jar" } size: 17079 timestamp: 1458157722722 type: FILE visibility: PRIVATE
[DEBUG] [2016-03-16 15:48:45] [Logging$class:logDebug:63] __spark__.jar -> resource { scheme: "hdfs" host: "master" port: 9000 file: "/user/root/.sparkStaging/application_1458155540402_0009/spark-assembly-1.5.1-hadoop2.6.0.jar" } size: 183999807 timestamp: 1458157719714 type: FILE visibility: PRIVATE
[DEBUG] [2016-03-16 15:48:45] [Logging$class:logDebug:63] __spark_conf__ -> resource { scheme: "hdfs" host: "master" port: 9000 file: "/user/root/.sparkStaging/application_1458155540402_0009/spark_conf4511951434652813078.zip" } size: 84093 timestamp: 1458157725461 type: ARCHIVE visibility: PRIVATE
[DEBUG] [2016-03-16 15:48:45] [Logging$class:logDebug:63] command:
[DEBUG] [2016-03-16 15:48:45] [Logging$class:logDebug:63] {{JAVA_HOME}}/bin/java -server -Xmx1024m -Djava.io.tmpdir={{PWD}}/tmp -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.deploy.yarn.ApplicationMaster --class 'com.tolon.spark.SlowNode' --jar file:/usr/local/src/hadoop-2.6.0/WordCount.jar --executor-memory 1024m --executor-cores 1 --properties-file {{PWD}}/__spark_conf__/__spark_conf__.properties 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
[DEBUG] [2016-03-16 15:48:45] [Logging$class:logDebug:63] ===============================================================================
[INFO ] [2016-03-16 15:48:45] [Logging$class:logInfo:59] Changing view acls to: root
[INFO ] [2016-03-16 15:48:45] [Logging$class:logInfo:59] Changing modify acls to: root
[INFO ] [2016-03-16 15:48:45] [Logging$class:logInfo:59] SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
[DEBUG] [2016-03-16 15:48:45] [Logging$class:logDebug:63] No SSL protocol specified
[DEBUG] [2016-03-16 15:48:46] [Logging$class:logDebug:63] No SSL protocol specified
[DEBUG] [2016-03-16 15:48:46] [Logging$class:logDebug:63] No SSL protocol specified
[DEBUG] [2016-03-16 15:48:46] [Logging$class:logDebug:63] SSLConfiguration for file server: SSLOptions{enabled=false, keyStore=None, keyStorePassword=None, trustStore=None, trustStorePassword=None, protocol=None, enabledAlgorithms=Set()}
[DEBUG] [2016-03-16 15:48:46] [Logging$class:logDebug:63] SSLConfiguration for Akka: SSLOptions{enabled=false, keyStore=None, keyStorePassword=None, trustStore=None, trustStorePassword=None, protocol=None, enabledAlgorithms=Set()}
[DEBUG] [2016-03-16 15:48:46] [Logging$class:logDebug:63] spark.yarn.maxAppAttempts is not set. Cluster's default value will be used.
[INFO ] [2016-03-16 15:48:46] [Logging$class:logInfo:59] Submitting application 9 to ResourceManager
[INFO ] [2016-03-16 15:48:46] [YarnClientImpl:submitApplication:251] Submitted application application_1458155540402_0009
[INFO ] [2016-03-16 15:48:47] [Logging$class:logInfo:59] Application report for application_1458155540402_0009 (state: ACCEPTED)
[DEBUG] [2016-03-16 15:48:47] [Logging$class:logDebug:63]
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1458157726132
final status: UNDEFINED
tracking URL: http://Master.Bing:8088/proxy/application_1458155540402_0009/
user: root
[... the same ACCEPTED application report repeats roughly once per second until the AM starts; repeated reports omitted ...]
hadoop-root-namenode-Master.Bing.log
2016-03-16 23:39:07,858 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2016-03-16 23:39:37,859 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2016-03-16 23:39:37,859 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2016-03-16 23:40:07,859 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
hadoop-root-resourcemanager-Master.Bing.log
2016-03-22 02:38:20,574 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 5
2016-03-22 02:38:25,744 WARN org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The specific max attempts: 0 for application: 5 is invalid, because it is out of the range [1, 2]. Use the global max attempts instead.
2016-03-22 02:38:25,745 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 5 submitted by user root
2016-03-22 02:38:25,745 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root IP=192.168.1.100 OPERATION=Submit Application Request TARGET=ClientRMService RESULT=SUCCESS APPID=application_1458627693195_0005
2016-03-22 02:38:25,745 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1458627693195_0005
2016-03-22 02:38:25,745 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1458627693195_0005 State change from NEW to NEW_SAVING
2016-03-22 02:38:25,745 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1458627693195_0005
2016-03-22 02:38:25,745 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1458627693195_0005 State change from NEW_SAVING to SUBMITTED
2016-03-22 02:38:25,745 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1458627693195_0005 user: root leaf-queue of parent: root #applications: 1
2016-03-22 02:38:25,745 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1458627693195_0005 from user: root, in queue: default
2016-03-22 02:38:25,745 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1458627693195_0005 State change from SUBMITTED to ACCEPTED
2016-03-22 02:38:25,745 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1458627693195_0005_000001
2016-03-22 02:38:25,745 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1458627693195_0005_000001 State change from NEW to SUBMITTED
2016-03-22 02:38:25,746 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1458627693195_0005 from user: root activated in queue: default
2016-03-22 02:38:25,746 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1458627693195_0005 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@4dba684b, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2016-03-22 02:38:25,746 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1458627693195_0005_000001 to scheduler from user root in queue default
2016-03-22 02:38:25,746 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1458627693195_0005_000001 State change from SUBMITTED to SCHEDULED
2016-03-22 02:38:26,319 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1458627693195_0005_01_000001 Container Transitioned from NEW to ALLOCATED
2016-03-22 02:38:26,319 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1458627693195_0005 CONTAINERID=container_1458627693195_0005_01_000001
2016-03-22 02:38:26,319 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1458627693195_0005_01_000001 of capacity on host Node2.Bing:39287, which has 1 containers, used and available after allocation
2016-03-22 02:38:26,320 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1458627693195_0005_000001 container=Container: [ContainerId: container_1458627693195_0005_01_000001, NodeId: Node2.Bing:39287, NodeHttpAddress: Node2.Bing:8042, Resource: , Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=
2016-03-22 02:38:26,320 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1
2016-03-22 02:38:26,320 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.125 absoluteUsedCapacity=0.125 used= cluster=
2016-03-22 02:38:26,321 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : Node2.Bing:39287 for container : container_1458627693195_0005_01_000001
2016-03-22 02:38:26,325 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1458627693195_0005_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
2016-03-22 02:38:26,325 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1458627693195_0005_000001
2016-03-22 02:38:26,325 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1458627693195_0005 AttemptId: appattempt_1458627693195_0005_000001 MasterContainer: Container: [ContainerId: container_1458627693195_0005_01_000001, NodeId: Node2.Bing:39287, NodeHttpAddress: Node2.Bing:8042, Resource: , Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.1.102:39287 }, ]
2016-03-22 02:38:26,325 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1458627693195_0005_000001 State change from SCHEDULED to ALLOCATED_SAVING
2016-03-22 02:38:26,326 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1458627693195_0005_000001 State change from ALLOCATED_SAVING to ALLOCATED
2016-03-22 02:38:26,326 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1458627693195_0005_000001
2016-03-22 02:38:26,330 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1458627693195_0005_01_000001, NodeId: Node2.Bing:39287, NodeHttpAddress: Node2.Bing:8042, Resource: , Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.1.102:39287 }, ] for AM appattempt_1458627693195_0005_000001
2016-03-22 02:38:26,330 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1458627693195_0005_01_000001 : {{JAVA_HOME}}/bin/java,-server,-Xmx1024m,-Djava.io.tmpdir={{PWD}}/tmp,-Dspark.yarn.app.container.log.dir=<LOG_DIR>,org.apache.spark.deploy.yarn.ApplicationMaster,--class,'com.tolon.spark.SlowNode',--jar,file:/usr/local/src/hadoop-2.6.0/WordCount.jar,--executor-memory,1024m,--executor-cores,1,--properties-file,{{PWD}}/__spark_conf__/__spark_conf__.properties,1>,<LOG_DIR>/stdout,2>,<LOG_DIR>/stderr
2016-03-22 02:38:26,330 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1458627693195_0005_000001
2016-03-22 02:38:26,330 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1458627693195_0005_000001
2016-03-22 02:38:26,366 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1458627693195_0005_01_000001, NodeId: Node2.Bing:39287, NodeHttpAddress: Node2.Bing:8042, Resource: , Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.1.102:39287 }, ] for AM appattempt_1458627693195_0005_000001
2016-03-22 02:38:26,366 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1458627693195_0005_000001 State change from ALLOCATED to LAUNCHED
2016-03-22 02:38:27,322 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1458627693195_0005_01_000001 Container Transitioned from ACQUIRED to RUNNING
2016-03-22 02:38:32,535 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1458627693195_0005_000001 (auth:SIMPLE)
2016-03-22 02:38:32,548 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: AM registration appattempt_1458627693195_0005_000001
2016-03-22 02:38:32,548 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root IP=192.168.1.102 OPERATION=Register App Master TARGET=ApplicationMasterService RESULT=SUCCESS APPID=application_1458627693195_0005 APPATTEMPTID=appattempt_1458627693195_0005_000001
2016-03-22 02:38:32,554 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1458627693195_0005_000001 State change from LAUNCHED to RUNNING
2016-03-22 02:38:32,554 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1458627693195_0005 State change from ACCEPTED to RUNNING
2016-03-22 02:38:33,332 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1458627693195_0005_01_000002 Container Transitioned from NEW to ALLOCATED
2016-03-22 02:38:33,332 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1458627693195_0005 CONTAINERID=container_1458627693195_0005_01_000002
2016-03-22 02:38:33,332 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1458627693195_0005_01_000002 of capacity on host Node2.Bing:39287, which has 2 containers, used and available after allocation
2016-03-22 02:38:33,332 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1458627693195_0005_000001 container=Container: [ContainerId: container_1458627693195_0005_01_000002, NodeId: Node2.Bing:39287, NodeHttpAddress: Node2.Bing:8042, Resource: , Priority: 1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1 clusterResource=
2016-03-22 02:38:33,332 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=2
2016-03-22 02:38:33,332 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used= cluster=
2016-03-22 02:38:33,623 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1458627693195_0005_01_000003 Container Transitioned from NEW to ALLOCATED
2016-03-22 02:38:33,623 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1458627693195_0005 CONTAINERID=container_1458627693195_0005_01_000003
2016-03-22 02:38:33,623 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1458627693195_0005_01_000003 of capacity on host Node1.Bing:52114, which has 1 containers, used and available after allocation
2016-03-22 02:38:33,623 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1458627693195_0005_000001 container=Container: [ContainerId: container_1458627693195_0005_01_000003, NodeId: Node1.Bing:52114, NodeHttpAddress: Node1.Bing:8042, Resource: , Priority: 1, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=2 clusterResource=
2016-03-22 02:38:33,623 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=3
2016-03-22 02:38:33,624 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.375 absoluteUsedCapacity=0.375 used= cluster=
2016-03-22 02:38:34,081 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : Node2.Bing:39287 for container : container_1458627693195_0005_01_000002
2016-03-22 02:38:34,086 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1458627693195_0005_01_000002 Container Transitioned from ALLOCATED to ACQUIRED
2016-03-22 02:38:34,087 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : Node1.Bing:52114 for container : container_1458627693195_0005_01_000003
2016-03-22 02:38:34,088 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1458627693195_0005_01_000003 Container Transitioned from ALLOCATED to ACQUIRED
2016-03-22 02:38:34,625 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1458627693195_0005_01_000003 Container Transitioned from ACQUIRED to RUNNING
2016-03-22 02:38:35,336 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1458627693195_0005_01_000002 Container Transitioned from ACQUIRED to RUNNING
2016-03-22 02:38:37,120 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: checking for deactivate...
2016-03-22 02:39:02,096 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1458627693195_0005_000001 with final state: FINISHING, and exit status: -1000
2016-03-22 02:39:02,096 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1458627693195_0005_000001 State change from RUNNING to FINAL_SAVING
2016-03-22 02:39:02,096 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1458627693195_0005 with final state: FINISHING
2016-03-22 02:39:02,096 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1458627693195_0005 State change from RUNNING to FINAL_SAVING
2016-03-22 02:39:02,097 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1458627693195_0005_000001 State change from FINAL_SAVING to FINISHING
2016-03-22 02:39:02,097 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1458627693195_0005
2016-03-22 02:39:02,097 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1458627693195_0005 State change from FINAL_SAVING to FINISHING
2016-03-22 02:39:02,199 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: application_1458627693195_0005 unregistered successfully.
2016-03-22 02:39:02,392 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1458627693195_0005_01_000002 Container Transitioned from RUNNING to COMPLETED
2016-03-22 02:39:02,392 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1458627693195_0005_01_000002 in state: COMPLETED event:FINISHED
2016-03-22 02:39:02,392 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1458627693195_0005 CONTAINERID=container_1458627693195_0005_01_000002
2016-03-22 02:39:02,392 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1458627693195_0005_01_000002 of capacity on host Node2.Bing:39287, which currently has 1 containers, used and available, release resources=true
2016-03-22 02:39:02,393 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used= numContainers=2 user=root user-resources=
2016-03-22 02:39:02,393 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1458627693195_0005_01_000002, NodeId: Node2.Bing:39287, NodeHttpAddress: Node2.Bing:8042, Resource: , Priority: 1, Token: Token { kind: ContainerToken, service: 192.168.1.102:39287 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=2 cluster=
2016-03-22 02:39:02,393 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used= cluster=
2016-03-22 02:39:02,393 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=2
2016-03-22 02:39:02,393 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1458627693195_0005_000001 released container container_1458627693195_0005_01_000002 on node: host: Node2.Bing:39287 #containers=1 available=6144 used=2048 with event: FINISHED
2016-03-22 02:39:02,691 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1458627693195_0005_01_000003 Container Transitioned from RUNNING to COMPLETED
2016-03-22 02:39:02,692 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1458627693195_0005_01_000003 in state: COMPLETED event:FINISHED
2016-03-22 02:39:02,692 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1458627693195_0005 CONTAINERID=container_1458627693195_0005_01_000003
2016-03-22 02:39:02,692 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1458627693195_0005_01_000003 of capacity on host Node1.Bing:52114, which currently has 0 containers, used and available, release resources=true
2016-03-22 02:39:02,692 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used= numContainers=1 user=root user-resources=
2016-03-22 02:39:02,693 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1458627693195_0005_01_000003, NodeId: Node1.Bing:52114, NodeHttpAddress: Node1.Bing:8042, Resource: , Priority: 1, Token: Token { kind: ContainerToken, service: 192.168.1.101:52114 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1 cluster=
2016-03-22 02:39:02,693 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.125 absoluteUsedCapacity=0.125 used= cluster=
2016-03-22 02:39:02,693 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1
2016-03-22 02:39:02,693 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1458627693195_0005_000001 released container container_1458627693195_0005_01_000003 on node: host: Node1.Bing:52114 #containers=0 available=8192 used=0 with event: FINISHED
2016-03-22 02:39:03,396 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1458627693195_0005_01_000001 Container Transitioned from RUNNING to COMPLETED
2016-03-22 02:39:03,396 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1458627693195_0005_000001
2016-03-22 02:39:03,396 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1458627693195_0005_01_000001 in state: COMPLETED event:FINISHED
2016-03-22 02:39:03,397 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1458627693195_0005_000001
2016-03-22 02:39:03,397 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1458627693195_0005 CONTAINERID=container_1458627693195_0005_01_000001
2016-03-22 02:39:03,397 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1458627693195_0005_000001 State change from FINISHING to FINISHED
2016-03-22 02:39:03,397 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1458627693195_0005_01_000001 of capacity on host Node2.Bing:39287, which currently has 0 containers, used and available, release resources=true
2016-03-22 02:39:03,397 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1458627693195_0005 State change from FINISHING to FINISHED
2016-03-22 02:39:03,397 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used= numContainers=0 user=root user-resources=
2016-03-22 02:39:03,397 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1458627693195_0005
2016-03-22 02:39:03,398 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1458627693195_0005_01_000001, NodeId: Node2.Bing:39287, NodeHttpAddress: Node2.Bing:8042, Resource: , Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.1.102:39287 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=
2016-03-22 02:39:03,398 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used= cluster=
2016-03-22 02:39:03,398 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2016-03-22 02:39:03,398 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1458627693195_0005,name=com.tolon.spark.SlowNode,user=root,queue=default,state=FINISHED,trackingUrl=http://Master.Bing:8088/proxy/application_1458627693195_0005/A,appMasterHost=192.168.1.102,startTime=1458628705744,finishTime=1458628742096,finalStatus=SUCCEEDED
2016-03-22 02:39:03,398 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1458627693195_0005_000001 released container container_1458627693195_0005_01_000001 on node: host: Node2.Bing:39287 #containers=0 available=8192 used=0 with event: FINISHED
2016-03-22 02:39:03,399 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning master appattempt_1458627693195_0005_000001
2016-03-22 02:39:03,399 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
2016-03-22 02:39:03,400 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1458627693195_0005_000001 is done. finalState=FINISHED
2016-03-22 02:39:03,400 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1458627693195_0005 requests cleared
2016-03-22 02:39:03,401 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1458627693195_0005 user: root queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2016-03-22 02:39:03,401 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1458627693195_0005 user: root leaf-queue of parent: root #applications: 0
2016-03-22 02:39:03,418 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
2016-03-22 02:39:03,418 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
2016-03-22 02:39:03,693 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
2016-03-22 02:39:04,422 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
2016-03-22 02:39:04,694 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
2016-03-23 02:21:33,204 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Rolling master-key for amrm-tokens
2016-03-23 02:21:33,204 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager: Rolling master-key for container-tokens
2016-03-23 02:21:33,205 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager: Going to activate master-key with key-id 292752455 in 900000ms
2016-03-23 02:21:33,205 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Rolling master-key for nm-tokens
2016-03-23 02:21:33,205 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Going to activate master-key with key-id 317447414 in 900000ms
2016-03-23 02:21:35,882 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2016-03-23 02:21:35,882 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: storing master key with keyID 3
2016-03-23 02:21:37,934 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2016-03-23 02:21:37,934 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2016-03-23 02:21:37,934 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2016-03-23 02:36:33,205 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Activating next master key with id: 288904787
2016-03-23 02:36:33,205 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager: Activating next master key with id: 292752455
2016-03-23 02:36:33,205 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Activating next master key with id: 317447414