A Brief Analysis of a Hive Execution Plan

Original SQL:

select a2.ISSUE_CODE   as ISSUE_CODE,
        a2.FZQDM        as FZQDM,
        a2.FZQLB        as FZQLB,
        a2.FJJDM        as FJJDM,
        a3.FSETCODE     as FSETCODE,
        a3.FSETID       as FSETID,
        a2.SRSCD        as SRSCD
      from (select t1.FSCDM  as ISSUE_CODE, --market code
                     t1.FZQDM as FZQDM,
                    (case when instr(t1.FZQLB, '非银行间') > 0
                          then '非银行间'
                          else '银行间'
                          end) as FZQLB,
                     t1.FJJDM as FJJDM,
                     t1.SRSCD as SRSCD
                     from (select
                               a1.FZQDM as FZQDM,
                               a1.FZQLB as FZQLB,
                               a1.FJJDM as FJJDM,
                               a1.SRSCD as SRSCD,
                               a1.FSCDM as FSCDM,
                               row_number() over(partition by a1.FZQDM,a1.SRSCD order by length(trim(a1.FSCDM)) desc,a1.FSCDM desc) sem
                               from TMP.CS_DWM_ISSU_SRC2STD_REL_H_05 a1
                              where a1.FJJDM is not null or a1.FJJDM = ' ') t1
                     where t1.sem=1
              ) a2
      left join (select distinct concat('A',lpad(t.FSETCODE,3,0)) as FSETCODE, --fund code
                          t.FSETID as FSETID,   --account set number
                          t.SRSCD as SRSCD
                   from TMP.CS_DWM_ISSU_SRC2STD_REL_H_06 t
                        )a3
        on a3.FSETCODE = a2.FJJDM and a2.SRSCD=a3.SRSCD
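
The execution plan below was produced with Hive's EXPLAIN statement; prefixing the full query above with EXPLAIN prints the stage plan without running the job. As a minimal, self-contained sketch against the same source table (EXPLAIN EXTENDED would additionally show file paths and serde details):

explain
select fzqdm, count(*)
  from TMP.CS_DWM_ISSU_SRC2STD_REL_H_05
 group by fzqdm;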

执行计划:

STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-5 depends on stages: Stage-1, Stage-3 , consists of Stage-6, Stage-2 /* Stage-5 waits for Stage-1 and Stage-3, then chooses between Stage-6 and Stage-2 */
  Stage-6 has a backup stage: Stage-2  /* if the Stage-6 map join cannot run, Stage-2 is the fallback */
  Stage-4 depends on stages: Stage-6   /* Stage-4 depends on Stage-6 */
  Stage-2
  Stage-3 is a root stage
  Stage-0 depends on stages: Stage-4, Stage-2

STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Map Operator Tree:
          TableScan
            alias: a1
            filterExpr: (fjjdm is not null or (fjjdm = ' ')) (type: boolean)
            Statistics: Num rows: 13830 Data size: 580901 Basic stats: COMPLETE Column stats: NONE
            Filter Operator
              predicate: (fjjdm is not null or (fjjdm = ' ')) (type: boolean)
              Statistics: Num rows: 13830 Data size: 580901 Basic stats: COMPLETE Column stats: NONE
              Reduce Output Operator
                key expressions: fzqdm (type: string), srscd (type: string), length(trim(fscdm)) (type: int), fscdm (type: string)
                sort order: ++--
                Map-reduce partition columns: fzqdm (type: string), srscd (type: string)
                Statistics: Num rows: 13830 Data size: 580901 Basic stats: COMPLETE Column stats: NONE
                value expressions: fzqlb (type: string), fjjdm (type: string)
      Reduce Operator Tree:
        Select Operator
          expressions: KEY.reducesinkkey3 (type: string), KEY.reducesinkkey0 (type: string), VALUE._col0 (type: string), VALUE._col1 (type: string), KEY.reducesinkkey1 (type: string)
          outputColumnNames: _col0, _col1, _col2, _col3, _col4
          Statistics: Num rows: 13830 Data size: 580901 Basic stats: COMPLETE Column stats: NONE
          PTF Operator
            Statistics: Num rows: 13830 Data size: 580901 Basic stats: COMPLETE Column stats: NONE
            Filter Operator
              predicate: (_wcol0 = 1) (type: boolean)
              Statistics: Num rows: 6915 Data size: 290450 Basic stats: COMPLETE Column stats: NONE
              Select Operator
                expressions: _col0 (type: string), _col1 (type: string), CASE WHEN ((instr(_col2, '非银行间') > 0)) THEN ('非银行间') ELSE ('银行间') END (type: string), _col3 (type: string), _col4 (type: string)
                outputColumnNames: _col0, _col1, _col2, _col3, _col4
                Statistics: Num rows: 6915 Data size: 290450 Basic stats: COMPLETE Column stats: NONE
                File Output Operator
                  compressed: false
                  table:
                      input format: org.apache.hadoop.mapred.SequenceFileInputFormat
                      output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
                      serde: org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe
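
/* Stage-1 implements the t1 subquery: the shuffle key (fzqdm, srscd, length(trim(fscdm)), fscdm) with sort order ++-- feeds the PTF Operator, which evaluates row_number(), and the (_wcol0 = 1) predicate is the pushed-down sem = 1 filter. The same keep-one-row-per-key pattern in isolation, as a sketch over a hypothetical table t(k, v):

   select k, v
     from (select k, v,
                  row_number() over (partition by k order by v desc) rn
             from t) x
    where rn = 1;
*/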

  Stage: Stage-5
    Conditional Operator   /* a conditional operator; not sure what it means? see the debug output below */

  Stage: Stage-6
    Map Reduce Local Work
      Alias -> Map Local Tables:
        $INTNAME1
          Fetch Operator
            limit: -1
      Alias -> Map Local Operator Tree:
        $INTNAME1
          TableScan
            HashTable Sink Operator
              keys:
                0 _col3 (type: string), _col4 (type: string)
                1 _col0 (type: string), _col2 (type: string)

  Stage: Stage-4
    Map Reduce
      Map Operator Tree:
          TableScan
            Map Join Operator
              condition map:
                   Left Outer Join0 to 1
              keys:
                0 _col3 (type: string), _col4 (type: string)
                1 _col0 (type: string), _col2 (type: string)
              outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6
              Statistics: Num rows: 7606 Data size: 319495 Basic stats: COMPLETE Column stats: NONE
              Select Operator
                expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: string), _col5 (type: string), _col6 (type: string), _col4 (type: string)
                outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6
                Statistics: Num rows: 7606 Data size: 319495 Basic stats: COMPLETE Column stats: NONE
                File Output Operator
                  compressed: true
                  Statistics: Num rows: 7606 Data size: 319495 Basic stats: COMPLETE Column stats: NONE
                  table:
                      input format: org.apache.hadoop.mapred.TextInputFormat
                      output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
                      serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
      Local Work:
        Map Reduce Local Work

  Stage: Stage-2
    Map Reduce
      Map Operator Tree:
          TableScan
            Reduce Output Operator
              key expressions: _col3 (type: string), _col4 (type: string)
              sort order: ++
              Map-reduce partition columns: _col3 (type: string), _col4 (type: string)
              Statistics: Num rows: 6915 Data size: 290450 Basic stats: COMPLETE Column stats: NONE
              value expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string)
          TableScan
            Reduce Output Operator
              key expressions: _col0 (type: string), _col2 (type: string)
              sort order: ++
              Map-reduce partition columns: _col0 (type: string), _col2 (type: string)
              Statistics: Num rows: 318 Data size: 4648 Basic stats: COMPLETE Column stats: NONE
              value expressions: _col1 (type: string)
      Reduce Operator Tree:
        Join Operator
          condition map:
               Left Outer Join0 to 1
          keys:
            0 _col3 (type: string), _col4 (type: string)
            1 _col0 (type: string), _col2 (type: string)
          outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6
          Statistics: Num rows: 7606 Data size: 319495 Basic stats: COMPLETE Column stats: NONE
          Select Operator
            expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: string), _col5 (type: string), _col6 (type: string), _col4 (type: string)
            outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6
            Statistics: Num rows: 7606 Data size: 319495 Basic stats: COMPLETE Column stats: NONE
            File Output Operator
              compressed: true
              Statistics: Num rows: 7606 Data size: 319495 Basic stats: COMPLETE Column stats: NONE
              table:
                  input format: org.apache.hadoop.mapred.TextInputFormat
                  output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
                  serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe

  Stage: Stage-3
    Map Reduce
      Map Operator Tree:
          TableScan
            alias: t
            Statistics: Num rows: 636 Data size: 9296 Basic stats: COMPLETE Column stats: NONE
            Select Operator
              expressions: fsetcode (type: string), fsetid (type: string), srscd (type: string)
              outputColumnNames: fsetcode, fsetid, srscd
              Statistics: Num rows: 636 Data size: 9296 Basic stats: COMPLETE Column stats: NONE
              Group By Operator
                keys: concat('A', lpad(fsetcode, 3, 0)) (type: string), fsetid (type: string), srscd (type: string)
                mode: hash
                outputColumnNames: _col0, _col1, _col2
                Statistics: Num rows: 636 Data size: 9296 Basic stats: COMPLETE Column stats: NONE
                Reduce Output Operator
                  key expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string)
                  sort order: +++
                  Map-reduce partition columns: _col0 (type: string), _col1 (type: string), _col2 (type: string)
                  Statistics: Num rows: 636 Data size: 9296 Basic stats: COMPLETE Column stats: NONE
      Reduce Operator Tree:
        Group By Operator
          keys: KEY._col0 (type: string), KEY._col1 (type: string), KEY._col2 (type: string)
          mode: mergepartial
          outputColumnNames: _col0, _col1, _col2
          Statistics: Num rows: 318 Data size: 4648 Basic stats: COMPLETE Column stats: NONE
          File Output Operator
            compressed: false
            table:
                input format: org.apache.hadoop.mapred.SequenceFileInputFormat
                output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
                serde: org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe
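
/* Stage-3 is the compiled form of the SELECT DISTINCT in a3: a hash-mode Group By on the map side merged by a mergepartial Group By on the reduce side. It is roughly equivalent to rewriting the subquery with an explicit GROUP BY:

   select concat('A', lpad(fsetcode, 3, '0')) as FSETCODE, fsetid, srscd
     from TMP.CS_DWM_ISSU_SRC2STD_REL_H_06
    group by concat('A', lpad(fsetcode, 3, '0')), fsetid, srscd;
*/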

  Stage: Stage-0
    Fetch Operator
      limit: -1
      Processor Tree:
        ListSink
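
A note on the Conditional Operator in Stage-5 before turning to the debug log: it is why the plan carries both a map-join path (Stage-6 builds the small side's hash table as local work, Stage-4 runs the map join) and a common shuffle-join path (Stage-2) as a backup. With hive.auto.convert.join enabled, Hive decides at run time whether the small side (the deduplicated a3 subquery, estimated above at 318 rows) fits under the map-join threshold, and only one branch executes. A sketch of the settings involved (the values shown are the usual Hive 1.2 defaults, stated here as assumptions):

set hive.auto.convert.join=true;               -- allow rewriting common joins into map joins
set hive.mapjoin.smalltable.filesize=25000000; -- byte threshold for the broadcast (small) side
-- set hive.auto.convert.join=false;           -- would force the plain shuffle join (Stage-2 only)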

I enabled Hive's debug mode on the command line; since it prints a lot of information, I redirected the output to a file:

hive -hiveconf hive.root.logger=INFO,console  -e "
 select a2.ISSUE_CODE   as ISSUE_CODE,
        a2.FZQDM        as FZQDM,
        a2.FZQLB        as FZQLB,
        a2.FJJDM        as FJJDM,
        a3.FSETCODE     as FSETCODE,
        a3.FSETID       as FSETID,
        a2.SRSCD        as SRSCD
      from (select t1.FSCDM  as ISSUE_CODE, --market code
                     t1.FZQDM as FZQDM,
                    (case when instr(t1.FZQLB, '非银行间') > 0
                          then '非银行间'
                          else '银行间'
                          end) as FZQLB,
                     t1.FJJDM as FJJDM,
                     t1.SRSCD as SRSCD
                     from (select
                               a1.FZQDM as FZQDM,
                               a1.FZQLB as FZQLB,
                               a1.FJJDM as FJJDM,
                               a1.SRSCD as SRSCD,
                               a1.FSCDM as FSCDM,
                               row_number() over(partition by a1.FZQDM,a1.SRSCD order by length(trim(a1.FSCDM)) desc,a1.FSCDM desc) sem
                               from TMP.CS_DWM_ISSU_SRC2STD_REL_H_05 a1
                              where a1.FJJDM is not null or a1.FJJDM = ' ') t1
                     where t1.sem=1
              ) a2
      left join (select distinct concat('A',lpad(t.FSETCODE,3,0)) as FSETCODE, --fund code
                          t.FSETID as FSETID,   --account set number
                          t.SRSCD as SRSCD
                   from TMP.CS_DWM_ISSU_SRC2STD_REL_H_06 t
                        )a3
        on a3.FSETCODE = a2.FJJDM and a2.SRSCD=a3.SRSCD"   >> hive.query.debug 2>&1

Inspecting the hive.query.debug file:

Logging initialized using configuration in jar:file:/opt/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar!/hive-log4j.properties
16/12/20 22:15:58 [main]: INFO SessionState:
Logging initialized using configuration in jar:file:/opt/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar!/hive-log4j.properties
16/12/20 22:15:58 [main]: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/12/20 22:15:59 [main]: INFO hive.metastore: Trying to connect to metastore with URI thrift://qxy1:9083
16/12/20 22:15:59 [main]: INFO hive.metastore: Connected to metastore.
16/12/20 22:16:00 [main]: INFO session.SessionState: Created local directory: /home/hadoop/tmpdir/adaaea53-aac6-4dd7-b86c-53daed294f17_resources
16/12/20 22:16:00 [main]: INFO session.SessionState: Created HDFS directory: /tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17
16/12/20 22:16:00 [main]: INFO session.SessionState: Created local directory: /home/hadoop/tmpdir/hive/adaaea53-aac6-4dd7-b86c-53daed294f17
16/12/20 22:16:00 [main]: INFO session.SessionState: Created HDFS directory: /tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/_tmp_space.db
16/12/20 22:16:00 [main]: INFO log.PerfLogger: <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
16/12/20 22:16:00 [main]: INFO log.PerfLogger: <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
16/12/20 22:16:00 [main]: INFO log.PerfLogger: <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
16/12/20 22:16:00 [main]: INFO log.PerfLogger: <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
16/12/20 22:16:00 [main]: INFO parse.ParseDriver: Parsing command:
 select a2.ISSUE_CODE   as ISSUE_CODE,
        a2.FZQDM        as FZQDM,
        a2.FZQLB        as FZQLB,
        a2.FJJDM        as FJJDM,
        a3.FSETCODE     as FSETCODE,
        a3.FSETID       as FSETID,
        a2.SRSCD        as SRSCD
      from (select t1.FSCDM  as ISSUE_CODE, --market code
                     t1.FZQDM as FZQDM,
                    (case when instr(t1.FZQLB, '非银行间') > 0
                          then '非银行间'
                          else '银行间'
                          end) as FZQLB,
                     t1.FJJDM as FJJDM,
                     t1.SRSCD as SRSCD
                     from (select
                               a1.FZQDM as FZQDM,
                               a1.FZQLB as FZQLB,
                               a1.FJJDM as FJJDM,
                               a1.SRSCD as SRSCD,
                               a1.FSCDM as FSCDM,
                               row_number() over(partition by a1.FZQDM,a1.SRSCD order by length(trim(a1.FSCDM)) desc,a1.FSCDM desc) sem
                               from TMP.CS_DWM_ISSU_SRC2STD_REL_H_05 a1
                              where a1.FJJDM is not null or a1.FJJDM = ' ') t1
                     where t1.sem=1
              ) a2
      left join (select distinct concat('A',lpad(t.FSETCODE,3,0)) as FSETCODE, --fund code
                          t.FSETID as FSETID,   --account set number
                          t.SRSCD as SRSCD
                   from TMP.CS_DWM_ISSU_SRC2STD_REL_H_06 t
                        )a3
        on a3.FSETCODE = a2.FJJDM and a2.SRSCD=a3.SRSCD
16/12/20 22:16:00 [main]: INFO parse.ParseDriver: Parse Completed
16/12/20 22:16:00 [main]: INFO log.PerfLogger: </PERFLOG method=parse start=1482300960101 end=1482300960807 duration=706 from=org.apache.hadoop.hive.ql.Driver>
16/12/20 22:16:00 [main]: INFO log.PerfLogger: <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
16/12/20 22:16:00 [main]: INFO parse.CalcitePlanner: Starting Semantic Analysis
16/12/20 22:16:01 [main]: INFO parse.CalcitePlanner: Completed phase 1 of Semantic Analysis
16/12/20 22:16:01 [main]: INFO parse.CalcitePlanner: Get metadata for source tables
16/12/20 22:16:01 [main]: INFO parse.CalcitePlanner: Get metadata for subqueries
16/12/20 22:16:01 [main]: INFO parse.CalcitePlanner: Get metadata for source tables
16/12/20 22:16:01 [main]: INFO parse.CalcitePlanner: Get metadata for subqueries
16/12/20 22:16:01 [main]: INFO parse.CalcitePlanner: Get metadata for source tables
16/12/20 22:16:01 [main]: INFO parse.CalcitePlanner: Get metadata for subqueries
16/12/20 22:16:01 [main]: INFO parse.CalcitePlanner: Get metadata for destination tables
16/12/20 22:16:01 [main]: INFO parse.CalcitePlanner: Get metadata for destination tables
16/12/20 22:16:01 [main]: INFO parse.CalcitePlanner: Get metadata for source tables
16/12/20 22:16:01 [main]: INFO parse.CalcitePlanner: Get metadata for subqueries
16/12/20 22:16:01 [main]: INFO parse.CalcitePlanner: Get metadata for destination tables
16/12/20 22:16:01 [main]: INFO parse.CalcitePlanner: Get metadata for destination tables
16/12/20 22:16:01 [main]: INFO ql.Context: New scratch dir is hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1
16/12/20 22:16:01 [main]: INFO parse.CalcitePlanner: Completed getting MetaData in Semantic Analysis
16/12/20 22:16:01 [main]: INFO parse.BaseSemanticAnalyzer: Not invoking CBO because the statement has too few joins
16/12/20 22:16:01 [main]: INFO common.FileUtils: Creating directory if it doesn't exist: hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10000/.hive-staging_hive_2016-12-20_22-16-00_099_8319005040929428210-1
16/12/20 22:16:02 [main]: INFO parse.CalcitePlanner: Set stats collection dir : hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10000/.hive-staging_hive_2016-12-20_22-16-00_099_8319005040929428210-1/-ext-10002
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Processing for FS(19)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Processing for SEL(18)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Processing for JOIN(17)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Processing for RS(15)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Processing for SEL(8)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Processing for FIL(7)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Pushdown Predicates of FIL For Alias : t1
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory:     (_col5 = 1)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Processing for SEL(6)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Pushdown Predicates of SEL For Alias : t1
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory:     (row_number_window_0 = 1)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Processing for SEL(5)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Pushdown Predicates of SEL For Alias :
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory:     (row_number_window_0 = 1)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Processing for PTF(4)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Processing for PTF(4)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Pushdown Predicates of PTF For Alias :
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory:     (row_number_window_0 = 1)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Processing for SEL(3)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Processing for RS(2)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Processing for FIL(1)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Pushdown Predicates of FIL For Alias : a1
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory:     (fjjdm is not null or (fjjdm = ' '))
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Processing for TS(0)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Pushdown Predicates of TS For Alias : a1
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory:     (fjjdm is not null or (fjjdm = ' '))
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Processing for RS(16)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Processing for SEL(14)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Processing for GBY(13)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Processing for RS(12)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Processing for GBY(11)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Processing for SEL(10)
16/12/20 22:16:02 [main]: INFO ppd.OpProcFactory: Processing for TS(9)
16/12/20 22:16:02 [main]: INFO log.PerfLogger: <PERFLOG method=partition-retrieving from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
16/12/20 22:16:02 [main]: INFO log.PerfLogger: </PERFLOG method=partition-retrieving start=1482300962166 end=1482300962169 duration=3 from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
16/12/20 22:16:02 [main]: INFO optimizer.ColumnPrunerProcFactory: JOIN 17 oldExprs: {0=[Column[VALUE._col0], Column[VALUE._col1], Column[VALUE._col2], Column[KEY.reducesinkkey0], Column[KEY.reducesinkkey1]], 1=[Column[KEY.reducesinkkey0], Column[VALUE._col0], Column[KEY.reducesinkkey1]]}
16/12/20 22:16:02 [main]: INFO optimizer.ColumnPrunerProcFactory: JOIN 17 newExprs: {0=[Column[VALUE._col0], Column[VALUE._col1], Column[VALUE._col2], Column[KEY.reducesinkkey0], Column[KEY.reducesinkkey1]], 1=[Column[KEY.reducesinkkey0], Column[VALUE._col0]]}
16/12/20 22:16:02 [main]: INFO optimizer.ColumnPrunerProcFactory: RS 15 oldColExprMap: {VALUE._col2=Column[_col2], VALUE._col0=Column[_col0], VALUE._col1=Column[_col1], KEY.reducesinkkey0=Column[_col3], KEY.reducesinkkey1=Column[_col4]}
16/12/20 22:16:02 [main]: INFO optimizer.ColumnPrunerProcFactory: RS 15 newColExprMap: {VALUE._col2=Column[_col2], VALUE._col0=Column[_col0], VALUE._col1=Column[_col1], KEY.reducesinkkey0=Column[_col3], KEY.reducesinkkey1=Column[_col4]}
16/12/20 22:16:02 [main]: INFO optimizer.ColumnPrunerProcFactory: RS 16 oldColExprMap: {VALUE._col0=Column[_col1], KEY.reducesinkkey0=Column[_col0], KEY.reducesinkkey1=Column[_col2]}
16/12/20 22:16:02 [main]: INFO optimizer.ColumnPrunerProcFactory: RS 16 newColExprMap: {VALUE._col0=Column[_col1], KEY.reducesinkkey0=Column[_col0], KEY.reducesinkkey1=Column[_col2]}
16/12/20 22:16:02 [main]: INFO optimizer.ColumnPrunerProcFactory: RS 12 oldColExprMap: {KEY._col0=Column[_col0], KEY._col1=Column[_col1], KEY._col2=Column[_col2]}
16/12/20 22:16:02 [main]: INFO optimizer.ColumnPrunerProcFactory: RS 12 newColExprMap: {KEY._col0=Column[_col0], KEY._col1=Column[_col1], KEY._col2=Column[_col2]}
16/12/20 22:16:02 [main]: INFO optimizer.ColumnPrunerProcFactory: RS 2 oldColExprMap: {VALUE._col2=Column[BLOCK__OFFSET__INSIDE__FILE], VALUE._col3=Column[INPUT__FILE__NAME], VALUE._col4=Column[ROW__ID], VALUE._col0=Column[fzqlb], VALUE._col1=Column[fjjdm], KEY.reducesinkkey0=Column[fzqdm], KEY.reducesinkkey1=Column[srscd], KEY.reducesinkkey2=GenericUDFBridge(GenericUDFTrim(Column[fscdm])), KEY.reducesinkkey3=Column[fscdm]}
16/12/20 22:16:02 [main]: INFO optimizer.ColumnPrunerProcFactory: RS 2 newColExprMap: {VALUE._col0=Column[fzqlb], VALUE._col1=Column[fjjdm], KEY.reducesinkkey0=Column[fzqdm], KEY.reducesinkkey1=Column[srscd], KEY.reducesinkkey2=GenericUDFBridge(GenericUDFTrim(Column[fscdm])), KEY.reducesinkkey3=Column[fscdm]}
16/12/20 22:16:02 [main]: INFO ql.Context: New scratch dir is hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1
16/12/20 22:16:02 [main]: INFO log.PerfLogger: <PERFLOG method=partition-retrieving from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
16/12/20 22:16:02 [main]: INFO log.PerfLogger: </PERFLOG method=partition-retrieving start=1482300962235 end=1482300962235 duration=0 from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
16/12/20 22:16:02 [main]: INFO ql.Context: New scratch dir is hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1
16/12/20 22:16:02 [main]: INFO log.PerfLogger: <PERFLOG method=getInputSummary from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:02 [main]: INFO exec.Utilities: Cannot get size of hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10003. Safely ignored.
16/12/20 22:16:02 [main]: INFO exec.Utilities: Cannot get size of hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10004. Safely ignored.
16/12/20 22:16:02 [main]: INFO log.PerfLogger: </PERFLOG method=getInputSummary start=1482300962243 end=1482300962258 duration=15 from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:02 [main]: INFO log.PerfLogger: <PERFLOG method=clonePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:02 [main]: INFO log.PerfLogger: <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:02 [main]: INFO exec.Utilities: Serializing MapredWork via kryo
16/12/20 22:16:02 [main]: INFO log.PerfLogger: </PERFLOG method=serializePlan start=1482300962335 end=1482300962407 duration=72 from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:02 [main]: INFO log.PerfLogger: <PERFLOG method=deserializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:02 [main]: INFO exec.Utilities: Deserializing MapredWork via kryo
16/12/20 22:16:02 [main]: INFO log.PerfLogger: </PERFLOG method=deserializePlan start=1482300962407 end=1482300962435 duration=28 from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:02 [main]: INFO log.PerfLogger: </PERFLOG method=clonePlan start=1482300962265 end=1482300962435 duration=170 from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:02 [main]: INFO ql.Context: New scratch dir is hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1
16/12/20 22:16:02 [main]: INFO ql.Context: New scratch dir is hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1
16/12/20 22:16:02 [main]: INFO physical.LocalMapJoinProcFactory: Setting max memory usage to 0.9 for table sink not followed by group by
16/12/20 22:16:02 [main]: INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
16/12/20 22:16:02 [main]: INFO physical.NullScanTaskDispatcher: Found 0 null table scans
16/12/20 22:16:02 [main]: INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
16/12/20 22:16:02 [main]: INFO physical.NullScanTaskDispatcher: Found 0 null table scans
16/12/20 22:16:02 [main]: INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
16/12/20 22:16:02 [main]: INFO physical.NullScanTaskDispatcher: Found 0 null table scans
16/12/20 22:16:02 [main]: INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
16/12/20 22:16:02 [main]: INFO physical.NullScanTaskDispatcher: Found 0 null table scans
16/12/20 22:16:02 [main]: INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
16/12/20 22:16:02 [main]: INFO physical.NullScanTaskDispatcher: Found 0 null table scans
16/12/20 22:16:02 [main]: INFO physical.NullScanTaskDispatcher: Looking for table scans where optimization is applicable
16/12/20 22:16:02 [main]: INFO physical.NullScanTaskDispatcher: Found 0 null table scans
16/12/20 22:16:02 [main]: INFO parse.CalcitePlanner: Completed plan generation
16/12/20 22:16:02 [main]: INFO ql.Driver: Semantic Analysis Completed
16/12/20 22:16:02 [main]: INFO log.PerfLogger: </PERFLOG method=semanticAnalyze start=1482300960810 end=1482300962455 duration=1645 from=org.apache.hadoop.hive.ql.Driver>
16/12/20 22:16:02 [main]: INFO exec.ListSinkOperator: Initializing operator OP[36]
16/12/20 22:16:02 [main]: INFO exec.ListSinkOperator: Initialization Done 36 OP
16/12/20 22:16:02 [main]: INFO exec.ListSinkOperator: Operator 36 OP initialized
16/12/20 22:16:02 [main]: INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:issue_code, type:string, comment:null), FieldSchema(name:fzqdm, type:string, comment:null), FieldSchema(name:fzqlb, type:string, comment:null), FieldSchema(name:fjjdm, type:string, comment:null), FieldSchema(name:fsetcode, type:string, comment:null), FieldSchema(name:fsetid, type:string, comment:null), FieldSchema(name:srscd, type:string, comment:null)], properties:null)
16/12/20 22:16:02 [main]: INFO log.PerfLogger: </PERFLOG method=compile start=1482300960067 end=1482300962528 duration=2461 from=org.apache.hadoop.hive.ql.Driver>
16/12/20 22:16:02 [main]: INFO log.PerfLogger: <PERFLOG method=acquireReadWriteLocks from=org.apache.hadoop.hive.ql.Driver>
16/12/20 22:16:02 [main]: INFO lockmgr.DummyTxnManager: Creating lock manager of type org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager
16/12/20 22:16:02 [main]: INFO imps.CuratorFrameworkImpl: Starting
16/12/20 22:16:02 [main]: INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
16/12/20 22:16:02 [main]: INFO zookeeper.ZooKeeper: Client environment:host.name=qxy1
16/12/20 22:16:02 [main]: INFO zookeeper.ZooKeeper: Client environment:java.version=1.8.0_77
16/12/20 22:16:02 [main]: INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
16/12/20 22:16:02 [main]: INFO zookeeper.ZooKeeper: Client environment:java.home=/opt/jdk1.8.0_77/jre
16/12/20 22:16:02 [main]: INFO zookeeper.ZooKeeper: Client environment:java.class.path=/opt/hadoop-2.6.2/etc/hadoop:/opt/hadoop-2.6.2/share/hadoop/common/lib/xz-1.0.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/commons-el-1.0.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/commons-codec-1.4.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/commons-net-3.1.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/stax-api-1.0-2.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/htrace-core-3.0.4.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/xmlenc-0.52.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/curator-recipes-2.6.0.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/hadoop-annotations-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/commons-collections-3.2.1.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/jersey-core-1.9.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/guava-11.0.2.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/curator-client-2.6.0.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/commons-lang-2.6.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/jetty-6.1.26.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/commons-digester-1.8.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/paranamer-2.3.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/activation-1.1.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/jets3t-0.9.0.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/commons-logging-1.1.3.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/httpclient-4.2.5.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/hamcrest-core-1.3.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/jetty-util-6.1.26.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/jersey-server-1.9.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/hadoop-auth-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/commons-io-2.4.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/netty-3.6.2.Final.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/jsr305-1.3.9.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/jsch-0.1.42.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/commons-configuration-1.6.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/zookeeper-3.4.6.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/jersey-json-1.9.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/hadoop-lzo-0.4.21-SNAPSHOT.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/junit-4.11.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/servlet-api-2.5.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/httpcore-4.2.5.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/commons-compress-1.4.1.jar:/opt/had
oop-2.6.2/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/asm-3.2.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/gson-2.2.4.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/commons-httpclient-3.1.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/curator-framework-2.6.0.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/mockito-all-1.8.5.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/avro-1.7.4.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/jettison-1.1.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop-2.6.2/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/hadoop-2.6.2/share/hadoop/common/hadoop-common-2.6.2-tests.jar:/opt/hadoop-2.6.2/share/hadoop/common/hadoop-common-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/common/hadoop-nfs-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/commons-el-1.0.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/htrace-core-3.0.4.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/commons-io-2.4.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/asm-3.2.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/hadoop-hdfs-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/hadoop-hdfs-nfs-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/hdfs/hadoop-hdfs-2.6.2-tests.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/xz-1.0.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/commons-codec-1.4.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/jersey-core-1.9.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/guava-11.0.2.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/commons-lang-2.6.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/jetty-6.1.26.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/
lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/activation-1.1.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/guice-3.0.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/jersey-server-1.9.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/commons-io-2.4.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/commons-cli-1.2.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/jersey-json-1.9.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/servlet-api-2.5.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/asm-3.2.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/jersey-client-1.9.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/jettison-1.1.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/lib/log4j-1.2.17.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/hadoop-yarn-api-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/hadoop-yarn-server-common-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/hadoop-yarn-client-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/hadoop-yarn-common-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/hadoop-yarn-server-tests-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/hadoop-yarn-registry-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/lib/xz-1.0.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/lib/hadoop-annotations-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/lib/guice-3.0.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/opt/hadoop-2.6.2/share/hadoop/mapre
duce/lib/leveldbjni-all-1.8.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/lib/junit-4.11.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/lib/asm-3.2.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/lib/javax.inject-1.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.6.2.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.2-tests.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/wc.jar:/opt/hadoop-2.6.2/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.6.2.jar:/opt/apache-hive-1.2.1-bin/conf:/opt/apache-hive-1.2.1-bin/lib/accumulo-core-1.6.0.jar:/opt/apache-hive-1.2.1-bin/lib/accumulo-fate-1.6.0.jar:/opt/apache-hive-1.2.1-bin/lib/accumulo-start-1.6.0.jar:/opt/apache-hive-1.2.1-bin/lib/accumulo-trace-1.6.0.jar:/opt/apache-hive-1.2.1-bin/lib/activation-1.1.jar:/opt/apache-hive-1.2.1-bin/lib/ant-1.9.1.jar:/opt/apache-hive-1.2.1-bin/lib/ant-launcher-1.9.1.jar:/opt/apache-hive-1.2.1-bin/lib/antlr-2.7.7.jar:/opt/apache-hive-1.2.1-bin/lib/antlr-runtime-3.4.jar:/opt/apache-hive-1.2.1-bin/lib/apache-log4j-extras-1.2.17.jar:/opt/apache-hive-1.2.1-bin/lib/asm-commons-3.1.jar:/opt/apache-hive-1.2.1-bin/lib/asm-tree-3.1.jar:/opt/apache-hive-1.2.1-bin/lib/avro-1.7.5.jar:/opt/apache-hive-1.2.1-bin/lib/bonecp-0.8.0.RELEASE.jar:/opt/apache-hive-1.2.1-bin/lib/calcite-avatica-1.2.0-incubating.jar:/opt/apache-hive-1.2.1-bin/lib/calcite-core-1.2.0-incubating.jar:/opt/apache-hive-1.2.1-bin/lib/calcite-linq4j-1.2.0-incubating.jar:/opt/apache-hive-1.2.1-bin/lib/commons-beanutils-1.7.0.jar:/opt/apache-hive-1.2.1-bin/lib/commons-beanutils-core-1.8.0.jar:/opt/apache-hive-1.2.1-bin/lib/commons-cli-1.2.jar:/opt/apache-hive-1.2.1-bin/lib/commons-codec-1.4.jar:/opt/apache-hive-1.2.1-bin/lib/commons-collections-3.2.1.jar:/opt/apache-hive-1.2.1-bin/lib/commons-compiler-2.7.6.jar:/opt/apache-hive-1.2.1-bin/lib/commons-compress-1.4.1.jar:/opt/apache-hive-1.2.1-bin/lib/commons-configuration-1.6.jar:/opt/apache-hive-1.2.1-bin/lib/commons-dbcp-1.4.jar:/opt/apache-hive-1.2.1-bin/lib/commons-digester-1.8.jar:/opt/apache-hive-1.2.1-bin/lib/commons-httpclient-3.0.1.jar:/opt/apache-hive-1.2.1-bin/lib/commons-io-2.4.jar:/opt/apache-hive-1.2.1-bin/lib/commons-lang-2.6.jar:/opt/apache-hive-1.2.1-bin/lib/commons-logging-1.1.3.jar:/opt/apache-hive-1.2.1-bin/lib/commons-math-2.1.jar:/opt/apache-hive-1.2.1-bin/lib/commons-pool-1.5.4.jar:/opt/apache-hive-1.2.1-bin/lib/commons-vfs2-2.0.jar:/opt/apache-hive-1.2.1-bin/lib/curator-client-2.6.0.jar:/opt/apache-hive-1.2.1-bin/lib/curator-framework-2.6.0.jar:/opt/apache-hive-1.2.1-bin/lib/curator-recipes-2.6.0.jar:/opt/apache-hive-1.2.1-bin/lib/datanucleus-api-jdo-3.2.6.jar:/opt/apache-hive-1.2.1-bin/lib/datanucleus-core-3.2.10.jar:/opt/apache-hive-1.2.1-bin/lib/datanucleus-rdbms-3.2.9.jar:/opt/apache-hive-1.2.1-
bin/lib/derby-10.10.2.0.jar:/opt/apache-hive-1.2.1-bin/lib/eigenbase-properties-1.1.5.jar:/opt/apache-hive-1.2.1-bin/lib/geronimo-annotation_1.0_spec-1.1.1.jar:/opt/apache-hive-1.2.1-bin/lib/geronimo-jaspic_1.0_spec-1.0.jar:/opt/apache-hive-1.2.1-bin/lib/geronimo-jta_1.1_spec-1.1.1.jar:/opt/apache-hive-1.2.1-bin/lib/groovy-all-2.1.6.jar:/opt/apache-hive-1.2.1-bin/lib/guava-14.0.1.jar:/opt/apache-hive-1.2.1-bin/lib/hamcrest-core-1.1.jar:/opt/apache-hive-1.2.1-bin/lib/hive-accumulo-handler-1.2.1.jar:/opt/apache-hive-1.2.1-bin/lib/hive-ant-1.2.1.jar:/opt/apache-hive-1.2.1-bin/lib/hive-beeline-1.2.1.jar:/opt/apache-hive-1.2.1-bin/lib/hive-cli-1.2.1.jar:/opt/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar:/opt/apache-hive-1.2.1-bin/lib/hive-contrib-1.2.1.jar:/opt/apache-hive-1.2.1-bin/lib/hive-exec-1.2.1.jar:/opt/apache-hive-1.2.1-bin/lib/hive-hbase-handler-1.2.1.jar:/opt/apache-hive-1.2.1-bin/lib/hive-hwi-1.2.1.jar:/opt/apache-hive-1.2.1-bin/lib/hive-jdbc-1.2.1.jar:/opt/apache-hive-1.2.1-bin/lib/hive-jdbc-1.2.1-standalone.jar:/opt/apache-hive-1.2.1-bin/lib/hive-metastore-1.2.1.jar:/opt/apache-hive-1.2.1-bin/lib/hive-serde-1.2.1.jar:/opt/apache-hive-1.2.1-bin/lib/hive-service-1.2.1.jar:/opt/apache-hive-1.2.1-bin/lib/hive-shims-0.20S-1.2.1.jar:/opt/apache-hive-1.2.1-bin/lib/hive-shims-0.23-1.2.1.jar:/opt/apache-hive-1.2.1-bin/lib/hive-shims-1.2.1.jar:/opt/apache-hive-1.2.1-bin/lib/hive-shims-common-1.2.1.jar:/opt/apache-hive-1.2.1-bin/lib/hive-shims-scheduler-1.2.1.jar:/opt/apache-hive-1.2.1-bin/lib/hive-testutils-1.2.1.jar:/opt/apache-hive-1.2.1-bin/lib/httpclient-4.4.jar:/opt/apache-hive-1.2.1-bin/lib/httpcore-4.4.jar:/opt/apache-hive-1.2.1-bin/lib/ivy-2.4.0.jar:/opt/apache-hive-1.2.1-bin/lib/janino-2.7.6.jar:/opt/apache-hive-1.2.1-bin/lib/jcommander-1.32.jar:/opt/apache-hive-1.2.1-bin/lib/jdo-api-3.0.1.jar:/opt/apache-hive-1.2.1-bin/lib/jetty-all-7.6.0.v20120127.jar:/opt/apache-hive-1.2.1-bin/lib/jetty-all-server-7.6.0.v20120127.jar:/opt/apache-hive-1.2.1-bin/lib/jline-2.12.jar:/opt/apache-hive-1.2.1-bin/lib/joda-time-2.5.jar:/opt/apache-hive-1.2.1-bin/lib/jpam-1.1.jar:/opt/apache-hive-1.2.1-bin/lib/json-20090211.jar:/opt/apache-hive-1.2.1-bin/lib/jsr305-3.0.0.jar:/opt/apache-hive-1.2.1-bin/lib/jta-1.1.jar:/opt/apache-hive-1.2.1-bin/lib/junit-4.11.jar:/opt/apache-hive-1.2.1-bin/lib/libfb303-0.9.2.jar:/opt/apache-hive-1.2.1-bin/lib/libthrift-0.9.2.jar:/opt/apache-hive-1.2.1-bin/lib/log4j-1.2.16.jar:/opt/apache-hive-1.2.1-bin/lib/mail-1.4.1.jar:/opt/apache-hive-1.2.1-bin/lib/maven-scm-api-1.4.jar:/opt/apache-hive-1.2.1-bin/lib/maven-scm-provider-svn-commons-1.4.jar:/opt/apache-hive-1.2.1-bin/lib/maven-scm-provider-svnexe-1.4.jar:/opt/apache-hive-1.2.1-bin/lib/mysql-connector-java-5.1.39.jar:/opt/apache-hive-1.2.1-bin/lib/netty-3.7.0.Final.jar:/opt/apache-hive-1.2.1-bin/lib/opencsv-2.3.jar:/opt/apache-hive-1.2.1-bin/lib/oro-2.0.8.jar:/opt/apache-hive-1.2.1-bin/lib/paranamer-2.3.jar:/opt/apache-hive-1.2.1-bin/lib/parquet-hadoop-bundle-1.6.0.jar:/opt/apache-hive-1.2.1-bin/lib/pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar:/opt/apache-hive-1.2.1-bin/lib/plexus-utils-1.5.6.jar:/opt/apache-hive-1.2.1-bin/lib/regexp-1.3.jar:/opt/apache-hive-1.2.1-bin/lib/servlet-api-2.5.jar:/opt/apache-hive-1.2.1-bin/lib/snappy-java-1.0.5.jar:/opt/apache-hive-1.2.1-bin/lib/ST4-4.0.4.jar:/opt/apache-hive-1.2.1-bin/lib/stax-api-1.0.1.jar:/opt/apache-hive-1.2.1-bin/lib/stringtemplate-3.2.1.jar:/opt/apache-hive-1.2.1-bin/lib/super-csv-2.2.0.jar:/opt/apache-hive-1.2.1-bin/lib/tempus-fugit-1.1.jar:/opt/apache-hive-1.2.1-bin/l
ib/velocity-1.5.jar:/opt/apache-hive-1.2.1-bin/lib/xz-1.0.jar:/opt/apache-hive-1.2.1-bin/lib/zookeeper-3.4.6.jar::/opt/hadoop-2.6.2//contrib/capacity-scheduler/*.jar
16/12/20 22:16:02 [main]: INFO zookeeper.ZooKeeper: Client environment:java.library.path=/opt/hadoop-2.6.2//lib/native:/opt/hadoop-2.6.2/lib/native
16/12/20 22:16:02 [main]: INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
16/12/20 22:16:02 [main]: INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
16/12/20 22:16:02 [main]: INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
16/12/20 22:16:02 [main]: INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
16/12/20 22:16:02 [main]: INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-573.el6.x86_64
16/12/20 22:16:02 [main]: INFO zookeeper.ZooKeeper: Client environment:user.name=hadoop
16/12/20 22:16:02 [main]: INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop
16/12/20 22:16:02 [main]: INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop
16/12/20 22:16:02 [main]: INFO zookeeper.ZooKeeper: Initiating client connection, connectString=qxy1:2181,qxy2:2181,qxy3:2181 sessionTimeout=1200000 [email protected]
16/12/20 22:16:02 [main-SendThread(qxy2:2181)]: INFO zookeeper.ClientCnxn: Opening socket connection to server qxy2/192.168.233.160:2181. Will not attempt to authenticate using SASL (unknown error)
16/12/20 22:16:02 [main-SendThread(qxy2:2181)]: INFO zookeeper.ClientCnxn: Socket connection established to qxy2/192.168.233.160:2181, initiating session
16/12/20 22:16:02 [main-SendThread(qxy2:2181)]: INFO zookeeper.ClientCnxn: Session establishment complete on server qxy2/192.168.233.160:2181, sessionid = 0x2591f661ea30000, negotiated timeout = 40000
16/12/20 22:16:02 [main-EventThread]: INFO state.ConnectionStateManager: State change: CONNECTED
16/12/20 22:16:02 [main]: INFO log.PerfLogger: </PERFLOG method=acquireReadWriteLocks start=1482300962528 end=1482300962777 duration=249 from=org.apache.hadoop.hive.ql.Driver>
16/12/20 22:16:02 [main]: INFO log.PerfLogger: <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
16/12/20 22:16:02 [main]: INFO ql.Driver: Starting command(queryId=hadoop_20161220221600_6290d359-2941-4748-84f6-75f6fc444d71):
 select a2.ISSUE_CODE   as ISSUE_CODE,
        a2.FZQDM        as FZQDM,
        a2.FZQLB        as FZQLB,
        a2.FJJDM        as FJJDM,
        a3.FSETCODE     as FSETCODE,
        a3.FSETID       as FSETID,
        a2.SRSCD        as SRSCD
      from (select t1.FSCDM  as ISSUE_CODE, --market code
                     t1.FZQDM as FZQDM,
                    (case when instr(t1.FZQLB, '非银行间') > 0
                          then '非银行间'
                          else '银行间'
                          end) as FZQLB,
                     t1.FJJDM as FJJDM,
                     t1.SRSCD as SRSCD
                     from (select
                               a1.FZQDM as FZQDM,
                               a1.FZQLB as FZQLB,
                               a1.FJJDM as FJJDM,
                               a1.SRSCD as SRSCD,
                               a1.FSCDM as FSCDM,
                               row_number() over(partition by a1.FZQDM,a1.SRSCD order by length(trim(a1.FSCDM)) desc,a1.FSCDM desc) sem
                               from TMP.CS_DWM_ISSU_SRC2STD_REL_H_05 a1
                              where a1.FJJDM is not null or a1.FJJDM = ' ') t1
                     where t1.sem=1
              ) a2
      left join (select distinct concat('A',lpad(t.FSETCODE,3,0)) as FSETCODE, --fund code
                          t.FSETID as FSETID,   --account set number
                          t.SRSCD as SRSCD
                   from TMP.CS_DWM_ISSU_SRC2STD_REL_H_06 t
                        )a3
        on a3.FSETCODE = a2.FJJDM and a2.SRSCD=a3.SRSCD
Query ID = hadoop_20161220221600_6290d359-2941-4748-84f6-75f6fc444d71
16/12/20 22:16:02 [main]: INFO ql.Driver: Query ID = hadoop_20161220221600_6290d359-2941-4748-84f6-75f6fc444d71
Total jobs = 4
16/12/20 22:16:02 [main]: INFO ql.Driver: Total jobs = 4
16/12/20 22:16:02 [main]: INFO log.PerfLogger: </PERFLOG method=TimeToSubmit start=1482300960067 end=1482300962800 duration=2733 from=org.apache.hadoop.hive.ql.Driver>
16/12/20 22:16:02 [main]: INFO log.PerfLogger: <PERFLOG method=runTasks from=org.apache.hadoop.hive.ql.Driver>
16/12/20 22:16:02 [main]: INFO log.PerfLogger: <PERFLOG method=task.MAPRED.Stage-1 from=org.apache.hadoop.hive.ql.Driver>
Launching Job 1 out of 4
16/12/20 22:16:02 [main]: INFO ql.Driver: Launching Job 1 out of 4
16/12/20 22:16:02 [main]: INFO ql.Driver: Starting task [Stage-1:MAPRED] in serial mode
16/12/20 22:16:02 [main]: INFO log.PerfLogger: <PERFLOG method=getInputSummary from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:02 [main]: INFO lzo.GPLNativeCodeLoader: Loaded native gpl library from the embedded binaries
16/12/20 22:16:02 [main]: INFO lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev b67c96f080d1ca98525a61000186eaf3fba959d3]
16/12/20 22:16:02 [main]: INFO exec.Utilities: Cache Content Summary for hdfs://qxy1/user/hive/warehouse/tmp.db/cs_dwm_issu_src2std_rel_h_05 length: 2016043 file count: 1 directory count: 1
16/12/20 22:16:02 [main]: INFO log.PerfLogger: </PERFLOG method=getInputSummary start=1482300962817 end=1482300962907 duration=90 from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:02 [main]: INFO exec.Utilities: BytesPerReducer=256000000 maxReducers=1009 totalInputFileSize=2016043
Number of reduce tasks not specified. Estimated from input data size: 1
16/12/20 22:16:02 [main]: INFO exec.Task: Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
16/12/20 22:16:02 [main]: INFO exec.Task: In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
16/12/20 22:16:02 [main]: INFO exec.Task:   set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
16/12/20 22:16:02 [main]: INFO exec.Task: In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
16/12/20 22:16:02 [main]: INFO exec.Task:   set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
16/12/20 22:16:02 [main]: INFO exec.Task: In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
16/12/20 22:16:02 [main]: INFO exec.Task:   set mapreduce.job.reduces=<number>
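
These three hints list the standard reducer-count knobs; here Hive estimated a single reducer from the 2,016,043-byte input at the 256 MB-per-reducer default. To override the estimate (a sketch; the values are illustrative):

set hive.exec.reducers.bytes.per.reducer=256000000;  -- average input bytes per reducer
set hive.exec.reducers.max=1009;                     -- cap on the reducer count
set mapreduce.job.reduces=2;                         -- pin an exact count, bypassing the estimate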
16/12/20 22:16:02 [main]: INFO ql.Context: New scratch dir is hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1
16/12/20 22:16:02 [main]: INFO mr.ExecDriver: Using org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
16/12/20 22:16:02 [main]: INFO exec.Utilities: Processing alias a2:t1:a1
16/12/20 22:16:02 [main]: INFO exec.Utilities: Adding input file hdfs://qxy1/user/hive/warehouse/tmp.db/cs_dwm_issu_src2std_rel_h_05
16/12/20 22:16:02 [main]: INFO exec.Utilities: Content Summary hdfs://qxy1/user/hive/warehouse/tmp.db/cs_dwm_issu_src2std_rel_h_05length: 2016043 num files: 1 num directories: 1
16/12/20 22:16:02 [main]: INFO ql.Context: New scratch dir is hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1
16/12/20 22:16:02 [main]: INFO log.PerfLogger: <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:02 [main]: INFO exec.Utilities: Serializing MapWork via kryo
16/12/20 22:16:03 [main]: INFO log.PerfLogger: </PERFLOG method=serializePlan start=1482300962994 end=1482300963242 duration=248 from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:03 [main]: INFO Configuration.deprecation: mapred.submit.replication is deprecated. Instead, use mapreduce.client.submit.file.replication
16/12/20 22:16:03 [main]: INFO log.PerfLogger: <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:03 [main]: INFO exec.Utilities: Serializing ReduceWork via kryo
16/12/20 22:16:03 [main]: INFO log.PerfLogger: </PERFLOG method=serializePlan start=1482300963253 end=1482300963292 duration=39 from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:03 [main]: ERROR mr.ExecDriver: yarn
16/12/20 22:16:03 [main]: INFO client.RMProxy: Connecting to ResourceManager at qxy1/192.168.233.159:8032
16/12/20 22:16:03 [main]: INFO client.RMProxy: Connecting to ResourceManager at qxy1/192.168.233.159:8032
16/12/20 22:16:03 [main]: INFO exec.Utilities: PLAN PATH = hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10008/20552a25-93da-4705-9630-9664220b1407/map.xml
16/12/20 22:16:03 [main]: INFO exec.Utilities: PLAN PATH = hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10008/20552a25-93da-4705-9630-9664220b1407/reduce.xml
16/12/20 22:16:04 [main]: WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/12/20 22:16:04 [main]: INFO log.PerfLogger: <PERFLOG method=getSplits from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
16/12/20 22:16:04 [main]: INFO exec.Utilities: PLAN PATH = hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10008/20552a25-93da-4705-9630-9664220b1407/map.xml
16/12/20 22:16:04 [main]: INFO io.CombineHiveInputFormat: Total number of paths: 1, launching 1 threads to check non-combinable ones.
16/12/20 22:16:04 [main]: INFO io.CombineHiveInputFormat: CombineHiveInputSplit creating pool for hdfs://qxy1/user/hive/warehouse/tmp.db/cs_dwm_issu_src2std_rel_h_05; using filter path hdfs://qxy1/user/hive/warehouse/tmp.db/cs_dwm_issu_src2std_rel_h_05
16/12/20 22:16:04 [main]: INFO input.FileInputFormat: Total input paths to process : 1
16/12/20 22:16:04 [main]: INFO input.CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 2, size left: 0
16/12/20 22:16:04 [main]: INFO io.CombineHiveInputFormat: number of splits 1
16/12/20 22:16:04 [main]: INFO io.CombineHiveInputFormat: Number of all splits 1
16/12/20 22:16:04 [main]: INFO log.PerfLogger: </PERFLOG method=getSplits start=1482300964426 end=1482300964483 duration=57 from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
16/12/20 22:16:04 [main]: INFO mapreduce.JobSubmitter: number of splits:1
16/12/20 22:16:04 [main]: INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1482284999685_0020
16/12/20 22:16:05 [main]: INFO impl.YarnClientImpl: Submitted application application_1482284999685_0020
16/12/20 22:16:05 [main]: INFO mapreduce.Job: The url to track the job: http://qxy1:8088/proxy/application_1482284999685_0020/
Starting Job = job_1482284999685_0020, Tracking URL = http://qxy1:8088/proxy/application_1482284999685_0020/
16/12/20 22:16:05 [main]: INFO exec.Task: Starting Job = job_1482284999685_0020, Tracking URL = http://qxy1:8088/proxy/application_1482284999685_0020/
Kill Command = /opt/hadoop-2.6.2//bin/hadoop job  -kill job_1482284999685_0020
16/12/20 22:16:05 [main]: INFO exec.Task: Kill Command = /opt/hadoop-2.6.2//bin/hadoop job  -kill job_1482284999685_0020
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
16/12/20 22:16:09 [main]: INFO exec.Task: Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
16/12/20 22:16:09 [main]: WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2016-12-20 22:16:09,470 Stage-1 map = 0%,  reduce = 0%
16/12/20 22:16:09 [main]: INFO exec.Task: 2016-12-20 22:16:09,470 Stage-1 map = 0%,  reduce = 0%
2016-12-20 22:16:15,801 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.31 sec
16/12/20 22:16:15 [main]: INFO exec.Task: 2016-12-20 22:16:15,801 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.31 sec
2016-12-20 22:16:22,059 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.71 sec
16/12/20 22:16:22 [main]: INFO exec.Task: 2016-12-20 22:16:22,059 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.71 sec
MapReduce Total cumulative CPU time: 4 seconds 710 msec
16/12/20 22:16:23 [main]: INFO exec.Task: MapReduce Total cumulative CPU time: 4 seconds 710 msec
Ended Job = job_1482284999685_0020
16/12/20 22:16:23 [main]: INFO exec.Task: Ended Job = job_1482284999685_0020
16/12/20 22:16:23 [main]: INFO exec.FileSinkOperator: Moving tmp dir: hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/_tmp.-mr-10003 to: hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10003
16/12/20 22:16:23 [main]: INFO log.PerfLogger: <PERFLOG method=task.MAPRED.Stage-3 from=org.apache.hadoop.hive.ql.Driver>
Launching Job 2 out of 4
16/12/20 22:16:23 [main]: INFO ql.Driver: Launching Job 2 out of 4
16/12/20 22:16:23 [main]: INFO ql.Driver: Starting task [Stage-3:MAPRED] in serial mode
16/12/20 22:16:23 [main]: INFO log.PerfLogger: <PERFLOG method=getInputSummary from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:23 [main]: INFO exec.Utilities: Cache Content Summary for hdfs://qxy1/user/hive/warehouse/tmp.db/cs_dwm_issu_src2std_rel_h_06 length: 8533 file count: 1 directory count: 1
16/12/20 22:16:23 [main]: INFO log.PerfLogger: </PERFLOG method=getInputSummary start=1482300983165 end=1482300983175 duration=10 from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:23 [main]: INFO exec.Utilities: BytesPerReducer=256000000 maxReducers=1009 totalInputFileSize=8533
Number of reduce tasks not specified. Estimated from input data size: 1
16/12/20 22:16:23 [main]: INFO exec.Task: Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
16/12/20 22:16:23 [main]: INFO exec.Task: In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
16/12/20 22:16:23 [main]: INFO exec.Task:   set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
16/12/20 22:16:23 [main]: INFO exec.Task: In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
16/12/20 22:16:23 [main]: INFO exec.Task:   set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
16/12/20 22:16:23 [main]: INFO exec.Task: In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
16/12/20 22:16:23 [main]: INFO exec.Task:   set mapreduce.job.reduces=<number>
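A quick aside on where "Estimated from input data size: 1" comes from: Hive roughly computes min(hive.exec.reducers.max, ceil(totalInputFileSize / bytesPerReducer)), and with the figures logged just above for this stage (totalInputFileSize=8533, BytesPerReducer=256000000, maxReducers=1009) that is ceil(8533 / 256000000) = 1 reducer. A minimal sketch of overriding the estimate, using the very settings the log suggests (the values here are illustrative only):

set hive.exec.reducers.bytes.per.reducer=128000000; -- lower the per-reducer load so the estimate yields more reducers
set hive.exec.reducers.max=100;                     -- cap the estimated reducer count
set mapreduce.job.reduces=2;                        -- or pin an exact count and bypass the estimate entirely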
16/12/20 22:16:23 [main]: INFO ql.Context: New scratch dir is hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1
16/12/20 22:16:23 [main]: INFO mr.ExecDriver: Using org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
16/12/20 22:16:23 [main]: INFO exec.Utilities: Processing alias a3:t
16/12/20 22:16:23 [main]: INFO exec.Utilities: Adding input file hdfs://qxy1/user/hive/warehouse/tmp.db/cs_dwm_issu_src2std_rel_h_06
16/12/20 22:16:23 [main]: INFO exec.Utilities: Content Summary hdfs://qxy1/user/hive/warehouse/tmp.db/cs_dwm_issu_src2std_rel_h_06length: 8533 num files: 1 num directories: 1
16/12/20 22:16:23 [main]: INFO ql.Context: New scratch dir is hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1
16/12/20 22:16:23 [main]: INFO log.PerfLogger: <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:23 [main]: INFO exec.Utilities: Serializing MapWork via kryo
16/12/20 22:16:23 [main]: INFO log.PerfLogger: </PERFLOG method=serializePlan start=1482300983183 end=1482300983207 duration=24 from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:23 [main]: INFO log.PerfLogger: <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:23 [main]: INFO exec.Utilities: Serializing ReduceWork via kryo
16/12/20 22:16:23 [main]: INFO log.PerfLogger: </PERFLOG method=serializePlan start=1482300983210 end=1482300983223 duration=13 from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:23 [main]: ERROR mr.ExecDriver: yarn
16/12/20 22:16:23 [main]: INFO client.RMProxy: Connecting to ResourceManager at qxy1/192.168.233.159:8032
16/12/20 22:16:23 [main]: INFO client.RMProxy: Connecting to ResourceManager at qxy1/192.168.233.159:8032
16/12/20 22:16:23 [main]: INFO exec.Utilities: PLAN PATH = hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10010/d4f8779f-af54-4a23-9cb7-90d74fe31387/map.xml
16/12/20 22:16:23 [main]: INFO exec.Utilities: PLAN PATH = hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10010/d4f8779f-af54-4a23-9cb7-90d74fe31387/reduce.xml
16/12/20 22:16:23 [main]: WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/12/20 22:16:23 [main]: INFO log.PerfLogger: <PERFLOG method=getSplits from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
16/12/20 22:16:23 [main]: INFO exec.Utilities: PLAN PATH = hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10010/d4f8779f-af54-4a23-9cb7-90d74fe31387/map.xml
16/12/20 22:16:23 [main]: INFO io.CombineHiveInputFormat: Total number of paths: 1, launching 1 threads to check non-combinable ones.
16/12/20 22:16:23 [main]: INFO io.CombineHiveInputFormat: CombineHiveInputSplit creating pool for hdfs://qxy1/user/hive/warehouse/tmp.db/cs_dwm_issu_src2std_rel_h_06; using filter path hdfs://qxy1/user/hive/warehouse/tmp.db/cs_dwm_issu_src2std_rel_h_06
16/12/20 22:16:23 [main]: INFO input.FileInputFormat: Total input paths to process : 1
16/12/20 22:16:23 [main]: INFO input.CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 2, size left: 0
16/12/20 22:16:23 [main]: INFO io.CombineHiveInputFormat: number of splits 1
16/12/20 22:16:23 [main]: INFO io.CombineHiveInputFormat: Number of all splits 1
16/12/20 22:16:23 [main]: INFO log.PerfLogger: </PERFLOG method=getSplits start=1482300983508 end=1482300983515 duration=7 from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
16/12/20 22:16:23 [main]: INFO mapreduce.JobSubmitter: number of splits:1
16/12/20 22:16:23 [main]: INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1482284999685_0021
16/12/20 22:16:23 [main]: INFO impl.YarnClientImpl: Submitted application application_1482284999685_0021
16/12/20 22:16:23 [main]: INFO mapreduce.Job: The url to track the job: http://qxy1:8088/proxy/application_1482284999685_0021/
Starting Job = job_1482284999685_0021, Tracking URL = http://qxy1:8088/proxy/application_1482284999685_0021/
16/12/20 22:16:23 [main]: INFO exec.Task: Starting Job = job_1482284999685_0021, Tracking URL = http://qxy1:8088/proxy/application_1482284999685_0021/
Kill Command = /opt/hadoop-2.6.2//bin/hadoop job  -kill job_1482284999685_0021
16/12/20 22:16:23 [main]: INFO exec.Task: Kill Command = /opt/hadoop-2.6.2//bin/hadoop job  -kill job_1482284999685_0021
Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 1
16/12/20 22:16:31 [main]: INFO exec.Task: Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 1
16/12/20 22:16:31 [main]: WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2016-12-20 22:16:31,935 Stage-3 map = 0%,  reduce = 0%
16/12/20 22:16:31 [main]: INFO exec.Task: 2016-12-20 22:16:31,935 Stage-3 map = 0%,  reduce = 0%
2016-12-20 22:16:37,134 Stage-3 map = 100%,  reduce = 0%, Cumulative CPU 1.79 sec
16/12/20 22:16:37 [main]: INFO exec.Task: 2016-12-20 22:16:37,134 Stage-3 map = 100%,  reduce = 0%, Cumulative CPU 1.79 sec
2016-12-20 22:16:43,394 Stage-3 map = 100%,  reduce = 100%, Cumulative CPU 3.11 sec
16/12/20 22:16:43 [main]: INFO exec.Task: 2016-12-20 22:16:43,394 Stage-3 map = 100%,  reduce = 100%, Cumulative CPU 3.11 sec
MapReduce Total cumulative CPU time: 3 seconds 110 msec
16/12/20 22:16:44 [main]: INFO exec.Task: MapReduce Total cumulative CPU time: 3 seconds 110 msec
Ended Job = job_1482284999685_0021
16/12/20 22:16:44 [main]: INFO exec.Task: Ended Job = job_1482284999685_0021
16/12/20 22:16:44 [main]: INFO exec.FileSinkOperator: Moving tmp dir: hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/_tmp.-mr-10004 to: hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10004
16/12/20 22:16:44 [main]: INFO log.PerfLogger: <PERFLOG method=task.CONDITION.Stage-5 from=org.apache.hadoop.hive.ql.Driver>
16/12/20 22:16:44 [main]: INFO ql.Driver: Starting task [Stage-5:CONDITIONAL] in serial mode
16/12/20 22:16:44 [main]: INFO plan.ConditionalResolverCommonJoin: Driver alias is [$INTNAME] with size 447899 (total size of others : 18054, threshold : 25000000)
Stage-6 is selected by condition resolver.
16/12/20 22:16:44 [main]: INFO exec.Task: Stage-6 is selected by condition resolver.
Stage-2 is filtered out by condition resolver. /* here the log confirms that the condition resolver discarded Stage-2 at runtime */
16/12/20 22:16:44 [main]: INFO exec.Task: Stage-2 is filtered out by condition resolver.
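These lines are the heart of the conditional Stage-5: ConditionalResolverCommonJoin picks a driver (big) side and sums the sizes of the remaining inputs. Here the driver alias $INTNAME (the materialized output of Stage-1, 447899 bytes) is joined against 18054 bytes of other input (exactly Stage-3's output), and since 18054 is below the 25000000-byte threshold, the map-join branch Stage-6 is selected and the common-join backup Stage-2 is discarded. A sketch of the settings behind this decision, with what are believed to be the defaults for this Hive version (an assumption, not taken from the log):

set hive.auto.convert.join=true;               -- let the optimizer convert common joins into map joins
set hive.mapjoin.smalltable.filesize=25000000; -- small-table threshold; note it matches the 25000000 in the log above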
16/12/20 22:16:44 [main]: INFO log.PerfLogger: <PERFLOG method=task.MAPREDLOCAL.Stage-6 from=org.apache.hadoop.hive.ql.Driver>
16/12/20 22:16:44 [main]: INFO ql.Driver: Starting task [Stage-6:MAPREDLOCAL] in serial mode
16/12/20 22:16:44 [main]: INFO mr.MapredLocalTask: Generating plan file file:/home/hadoop/tmpdir/hive/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-local-10011/plan.xml
16/12/20 22:16:44 [main]: INFO log.PerfLogger: <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:44 [main]: INFO exec.Utilities: Serializing MapredLocalWork via kryo
16/12/20 22:16:44 [main]: INFO log.PerfLogger: </PERFLOG method=serializePlan start=1482301004506 end=1482301004516 duration=10 from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:44 [main]: INFO mr.MapredLocalTask: Executing: /opt/hadoop-2.6.2//bin/hadoop jar /opt/apache-hive-1.2.1-bin/lib/hive-exec-1.2.1.jar org.apache.hadoop.hive.ql.exec.mr.ExecDriver -localtask -plan file:/home/hadoop/tmpdir/hive/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-local-10011/plan.xml   -jobconffile file:/home/hadoop/tmpdir/hive/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-local-10012/jobconf.xml
16/12/20 22:16:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Execution log at: /tmp/hadoop/hadoop_20161220221600_6290d359-2941-4748-84f6-75f6fc444d71.log
2016-12-20 22:16:55    Starting to launch local task to process map join;    maximum memory = 477626368
2016-12-20 22:16:57    Dump the side-table for tag: 1 with group count: 64 into file: file:/home/hadoop/tmpdir/hive/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-local-10005/HashTable-Stage-4/MapJoin-mapfile01--.hashtable
2016-12-20 22:16:57    Uploaded 1 File to: file:/home/hadoop/tmpdir/hive/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-local-10005/HashTable-Stage-4/MapJoin-mapfile01--.hashtable (8059 bytes)
2016-12-20 22:16:57    End of local task; Time Taken: 2.836 sec.
Execution completed successfully
16/12/20 22:16:58 [main]: INFO exec.Task: Execution completed successfully
MapredLocal task succeeded
16/12/20 22:16:58 [main]: INFO exec.Task: MapredLocal task succeeded
16/12/20 22:16:58 [main]: INFO mr.MapredLocalTask: Execution completed successfully
16/12/20 22:16:58 [main]: INFO log.PerfLogger: <PERFLOG method=task.MAPRED.Stage-4 from=org.apache.hadoop.hive.ql.Driver>
Launching Job 4 out of 4
16/12/20 22:16:58 [main]: INFO ql.Driver: Launching Job 4 out of 4
16/12/20 22:16:58 [main]: INFO ql.Driver: Starting task [Stage-4:MAPRED] in serial mode
Number of reduce tasks is set to 0 since there's no reduce operator
16/12/20 22:16:58 [main]: INFO exec.Task: Number of reduce tasks is set to 0 since there's no reduce operator
16/12/20 22:16:58 [main]: INFO ql.Context: New scratch dir is hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1
16/12/20 22:16:58 [main]: INFO mr.ExecDriver: Using org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
16/12/20 22:16:58 [main]: INFO mr.ExecDriver: Archive 1 hash table files to file:/home/hadoop/tmpdir/hive/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-local-10005/HashTable-Stage-4/Stage-4.tar.gz
16/12/20 22:16:58 [main]: INFO mr.ExecDriver: Upload 1 archive file  fromfile:/home/hadoop/tmpdir/hive/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-local-10005/HashTable-Stage-4/Stage-4.tar.gz to: hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10006/HashTable-Stage-4/Stage-4.tar.gz
16/12/20 22:16:58 [main]: INFO mr.ExecDriver: Add 1 archive file to distributed cache. Archive file: hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10006/HashTable-Stage-4/Stage-4.tar.gz
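The block above is the map-join machinery end to end: a local, non-MapReduce task scans the small side, builds an in-memory hashtable (64 keys, 8059 bytes), dumps it to a local file, archives it, and uploads the archive to HDFS so every mapper of Stage-4 can pull it from the distributed cache - which is why Stage-4 runs with 0 reducers. If the optimizer had not chosen this path on its own, roughly the same plan can be requested with the legacy hint syntax - a sketch only, on hypothetical tables, and it assumes hive.ignore.mapjoin.hint=false so the hint is honored:

set hive.ignore.mapjoin.hint=false;  -- honor MAPJOIN hints (assumed setting for this sketch)
select /*+ MAPJOIN(small) */ big.k, big.v, small.v
  from big_table big                 -- hypothetical big-side table
  left join small_table small        -- hypothetical small side, loaded into the hashtable
    on small.k = big.k;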
16/12/20 22:16:58 [main]: INFO exec.Utilities: Processing alias $INTNAME
16/12/20 22:16:58 [main]: INFO exec.Utilities: Adding input file hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10003
16/12/20 22:16:58 [main]: INFO exec.Utilities: Content Summary not cached for hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10003
16/12/20 22:16:58 [main]: INFO ql.Context: New scratch dir is hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1
16/12/20 22:16:58 [main]: INFO log.PerfLogger: <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:58 [main]: INFO exec.Utilities: Serializing MapWork via kryo
16/12/20 22:16:58 [main]: INFO log.PerfLogger: </PERFLOG method=serializePlan start=1482301018690 end=1482301018717 duration=27 from=org.apache.hadoop.hive.ql.exec.Utilities>
16/12/20 22:16:58 [main]: ERROR mr.ExecDriver: yarn
16/12/20 22:16:58 [main]: INFO client.RMProxy: Connecting to ResourceManager at qxy1/192.168.233.159:8032
16/12/20 22:16:58 [main]: INFO client.RMProxy: Connecting to ResourceManager at qxy1/192.168.233.159:8032
16/12/20 22:16:58 [main]: INFO exec.Utilities: PLAN PATH = hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10014/dd707887-351d-4106-a1c0-fbcee0a27432/map.xml
16/12/20 22:16:58 [main]: INFO exec.Utilities: PLAN PATH = hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10014/dd707887-351d-4106-a1c0-fbcee0a27432/reduce.xml
16/12/20 22:16:58 [main]: INFO exec.Utilities: ***************non-local mode***************
16/12/20 22:16:58 [main]: INFO exec.Utilities: local path = hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10014/dd707887-351d-4106-a1c0-fbcee0a27432/reduce.xml
16/12/20 22:16:58 [main]: INFO exec.Utilities: Open file to read in plan: hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10014/dd707887-351d-4106-a1c0-fbcee0a27432/reduce.xml
16/12/20 22:16:58 [main]: INFO exec.Utilities: File not found: File does not exist: /tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10014/dd707887-351d-4106-a1c0-fbcee0a27432/reduce.xml
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1893)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1834)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1814)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1786)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:552)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)

16/12/20 22:16:58 [main]: INFO exec.Utilities: No plan file found: hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10014/dd707887-351d-4106-a1c0-fbcee0a27432/reduce.xml
16/12/20 22:16:58 [main]: WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/12/20 22:16:59 [main]: INFO log.PerfLogger: <PERFLOG method=getSplits from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
16/12/20 22:16:59 [main]: INFO exec.Utilities: PLAN PATH = hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10014/dd707887-351d-4106-a1c0-fbcee0a27432/map.xml
16/12/20 22:16:59 [main]: INFO io.CombineHiveInputFormat: Total number of paths: 1, launching 1 threads to check non-combinable ones.
16/12/20 22:16:59 [main]: INFO io.CombineHiveInputFormat: CombineHiveInputSplit creating pool for hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10003; using filter path hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10003
16/12/20 22:16:59 [main]: INFO input.FileInputFormat: Total input paths to process : 1
16/12/20 22:16:59 [main]: INFO input.CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 2, size left: 0
16/12/20 22:16:59 [main]: INFO io.CombineHiveInputFormat: number of splits 1
16/12/20 22:16:59 [main]: INFO io.CombineHiveInputFormat: Number of all splits 1
16/12/20 22:16:59 [main]: INFO log.PerfLogger: </PERFLOG method=getSplits start=1482301019140 end=1482301019148 duration=8 from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
16/12/20 22:16:59 [main]: INFO mapreduce.JobSubmitter: number of splits:1
16/12/20 22:16:59 [main]: INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1482284999685_0022
16/12/20 22:16:59 [main]: INFO impl.YarnClientImpl: Submitted application application_1482284999685_0022
16/12/20 22:16:59 [main]: INFO mapreduce.Job: The url to track the job: http://qxy1:8088/proxy/application_1482284999685_0022/
Starting Job = job_1482284999685_0022, Tracking URL = http://qxy1:8088/proxy/application_1482284999685_0022/
16/12/20 22:16:59 [main]: INFO exec.Task: Starting Job = job_1482284999685_0022, Tracking URL = http://qxy1:8088/proxy/application_1482284999685_0022/
Kill Command = /opt/hadoop-2.6.2//bin/hadoop job  -kill job_1482284999685_0022
16/12/20 22:16:59 [main]: INFO exec.Task: Kill Command = /opt/hadoop-2.6.2//bin/hadoop job  -kill job_1482284999685_0022
Hadoop job information for Stage-4: number of mappers: 1; number of reducers: 0
16/12/20 22:17:03 [main]: INFO exec.Task: Hadoop job information for Stage-4: number of mappers: 1; number of reducers: 0
16/12/20 22:17:03 [main]: WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2016-12-20 22:17:03,473 Stage-4 map = 0%,  reduce = 0%
16/12/20 22:17:03 [main]: INFO exec.Task: 2016-12-20 22:17:03,473 Stage-4 map = 0%,  reduce = 0%
2016-12-20 22:17:09,752 Stage-4 map = 100%,  reduce = 0%, Cumulative CPU 1.57 sec
16/12/20 22:17:09 [main]: INFO exec.Task: 2016-12-20 22:17:09,752 Stage-4 map = 100%,  reduce = 0%, Cumulative CPU 1.57 sec
MapReduce Total cumulative CPU time: 1 seconds 570 msec
16/12/20 22:17:10 [main]: INFO exec.Task: MapReduce Total cumulative CPU time: 1 seconds 570 msec
Ended Job = job_1482284999685_0022
16/12/20 22:17:10 [main]: INFO exec.Task: Ended Job = job_1482284999685_0022
16/12/20 22:17:10 [main]: INFO exec.FileSinkOperator: Moving tmp dir: hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10000/.hive-staging_hive_2016-12-20_22-16-00_099_8319005040929428210-1/_tmp.-ext-10001 to: hdfs://qxy1/tmp/hive/hadoop/adaaea53-aac6-4dd7-b86c-53daed294f17/hive_2016-12-20_22-16-00_099_8319005040929428210-1/-mr-10000/.hive-staging_hive_2016-12-20_22-16-00_099_8319005040929428210-1/-ext-10001
16/12/20 22:17:10 [main]: INFO log.PerfLogger: </PERFLOG method=runTasks start=1482300962800 end=1482301030833 duration=68033 from=org.apache.hadoop.hive.ql.Driver>
16/12/20 22:17:10 [main]: INFO log.PerfLogger: </PERFLOG method=Driver.execute start=1482300962778 end=1482301030834 duration=68056 from=org.apache.hadoop.hive.ql.Driver>
MapReduce Jobs Launched:
16/12/20 22:17:10 [main]: INFO ql.Driver: MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 4.71 sec   HDFS Read: 2027044 HDFS Write: 447899 SUCCESS
16/12/20 22:17:10 [main]: INFO ql.Driver: Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 4.71 sec   HDFS Read: 2027044 HDFS Write: 447899 SUCCESS
Stage-Stage-3: Map: 1  Reduce: 1   Cumulative CPU: 3.11 sec   HDFS Read: 16290 HDFS Write: 18054 SUCCESS
16/12/20 22:17:10 [main]: INFO ql.Driver: Stage-Stage-3: Map: 1  Reduce: 1   Cumulative CPU: 3.11 sec   HDFS Read: 16290 HDFS Write: 18054 SUCCESS
Stage-Stage-4: Map: 1   Cumulative CPU: 1.57 sec   HDFS Read: 453508 HDFS Write: 432499 SUCCESS
16/12/20 22:17:10 [main]: INFO ql.Driver: Stage-Stage-4: Map: 1   Cumulative CPU: 1.57 sec   HDFS Read: 453508 HDFS Write: 432499 SUCCESS
Total MapReduce CPU Time Spent: 9 seconds 390 msec
16/12/20 22:17:10 [main]: INFO ql.Driver: Total MapReduce CPU Time Spent: 9 seconds 390 msec
OK
16/12/20 22:17:10 [main]: INFO ql.Driver: OK
16/12/20 22:17:10 [main]: INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
16/12/20 22:17:10 [main]: INFO ZooKeeperHiveLockManager:  about to release lock for tmp/cs_dwm_issu_src2std_rel_h_06
16/12/20 22:17:10 [main]: INFO ZooKeeperHiveLockManager:  about to release lock for tmp/cs_dwm_issu_src2std_rel_h_05
16/12/20 22:17:10 [main]: INFO ZooKeeperHiveLockManager:  about to release lock for tmp
16/12/20 22:17:10 [main]: INFO ZooKeeperHiveLockManager:  about to release lock for default
16/12/20 22:17:10 [main]: INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1482301030835 end=1482301030908 duration=73 from=org.apache.hadoop.hive.ql.Driver>
16/12/20 22:17:10 [main]: INFO log.PerfLogger: </PERFLOG method=Driver.run start=1482300960066 end=1482301030908 duration=70842 from=org.apache.hadoop.hive.ql.Driver>
16/12/20 22:17:10 [main]: INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
16/12/20 22:17:10 [main]: INFO mapred.FileInputFormat: Total input paths to process : 1

16/12/20 22:17:11 [main]: INFO exec.ListSinkOperator: 36 finished. closing...
16/12/20 22:17:11 [main]: INFO exec.ListSinkOperator: 36 Close done
Time taken: 70.845 seconds, Fetched: 8990 row(s)
16/12/20 22:17:11 [main]: INFO CliDriver: Time taken: 70.845 seconds, Fetched: 8990 row(s)
16/12/20 22:17:11 [main]: INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
16/12/20 22:17:11 [main]: INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1482301031156 end=1482301031156 duration=0 from=org.apache.hadoop.hive.ql.Driver>
16/12/20 22:17:11 [main-EventThread]: INFO zookeeper.ClientCnxn: EventThread shut down
16/12/20 22:17:11 [Thread-8]: INFO zookeeper.ZooKeeper: Session: 0x2591f661ea30000 closed
16/12/20 22:17:11 [Thread-8]: INFO CuratorFrameworkSingleton: Closing ZooKeeper client.

So the stages actually execute in this order:

Stage-1 -> Stage-3 -> Stage-5{Stage-6, Stage-2} -> Stage-6 -> Stage-4 -> Stage-0

Stage-5 is the conditional task that selected Stage-6 (the local hashtable build) and filtered out the backup Stage-2 (the common shuffle join); Stage-4 then performs the map-only join, and Stage-0 fetches the final result.
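One way to confirm that Stage-2 really is the common-join fallback (a sketch, not executed here) is to switch the automatic conversion off and compare plans; with map-join conversion disabled, the conditional Stage-5/Stage-6 pair should no longer appear and a plain shuffle join should run instead:

set hive.auto.convert.join=false; -- disable map-join conversion
explain
select ... ;                      -- re-issue the original query and inspect the new stage graph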
