Quickly Copying a Partitioned Table's Structure and Data in Hive


2024-07-17 12:49:33 | Source: compiled from the web

First, recall the three ways to create a table in Hive:

1. Create a table directly: create table table_name (column1 type1, column2 type2, ...)
2. Create a new table that copies only an old table's structure: create table new_table_name like old_table_name
3. Create a new table that copies both structure and data: create table new_table_name as select * from old_table_name

Given this requirement and the three creation forms above: for a non-partitioned table, the third form is enough to copy the table. In practice, however, most Hive tables are partitioned. For a partitioned table we could use the second form to copy the old table's structure, then use dynamic partitioning to insert the old table's partition data into the new table. That works, but it is far from the most efficient approach.

A faster approach is:

1. create table new_partition like old_partition;

2. Use hdfs dfs -cp to copy all the partition directories under old_partition's HDFS directory into new_partition's directory.

3. Run msck repair table new_partition; to repair the new table's partition metadata.
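The three steps above can be sketched as one shell script. The warehouse path and database name below are placeholders, not taken from the article; adjust them to your environment.

```shell
#!/usr/bin/env bash
set -e
# Placeholders -- substitute your own warehouse root and database directory.
WAREHOUSE=/user/hive/warehouse
DB=mydb.db

# 1. Copy only the table structure (columns + partition columns).
hive -e "create table new_partition like old_partition;"

# 2. Copy the partition directories at the HDFS level.
#    This is a pure filesystem copy: no MapReduce job is launched.
hdfs dfs -cp "${WAREHOUSE}/${DB}/old_partition/*" \
             "${WAREHOUSE}/${DB}/new_partition/"

# 3. Register the copied directories as partitions in the metastore.
hive -e "msck repair table new_partition;"
```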

Let's test it.

Our database contains the following table:

> select * from rt_data;
OK
rt_data.shop_id rt_data.stat_date       rt_data.ordamt
10026   201501120030    5170.0
10026   201501120100    5669.0
10026   201501120130    2396.0
10026   201501120200    1498.0
10026   201501120230    1997.0
10026   201501120300    1188.0
10026   201501120330    598.0
10026   201501120400    479.0
10026   201501120430    1587.0
10026   201501120530    799.0
10027   201501120030    2170.0
10027   201501120100    1623.0
10027   201501120130    3397.0
10027   201501120200    1434.0
10027   201501120230    1001.0
10028   201501120300    1687.0
10028   201501120330    1298.0
10028   201501120400    149.0
10029   201501120430    2587.0
10029   201501120530    589.0
Time taken: 12.712 seconds, Fetched: 20 row(s)

Create a partitioned table:

> create table old_partition (shop_id int, stat_date string, ordamt double) partitioned by (daystr string);
OK
Time taken: 0.17 seconds

Enable dynamic partitioning (nonstrict mode allows every partition value to be determined dynamically; on some Hive versions you may also need set hive.exec.dynamic.partition=true;):

set hive.exec.dynamic.partition.mode=nonstrict;

Dynamically insert the data from rt_data into old_partition:

> insert into table old_partition partition (daystr)
> select shop_id, stat_date, ordamt, stat_date from rt_data;
Query ID = rdedu_20180729105529_0d16042f-5d27-4616-826f-bc5e02067767
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1532570467965_0057, Tracking URL = http://bigdata-training01.erongda.com:8088/proxy/application_1532570467965_0057/
Kill Command = /opt/cdh-5.7.6/hadoop-2.6.0-cdh5.7.6/bin/mapred job -kill job_1532570467965_0057
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2018-07-29 10:59:14,083 Stage-1 map = 0%, reduce = 0%
2018-07-29 11:00:04,874 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 20.9 sec
2018-07-29 11:00:17,029 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 23.07 sec
MapReduce Total cumulative CPU time: 23 seconds 70 msec
Ended Job = job_1532570467965_0057
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to directory hdfs://bigdata-training01.erongda.com:8020/user/hive/warehouse-3.1.1/db_window.db/old_partition/.hive-staging_hive_2018-07-29_10-55-29_054_1770048812106415069-1/-ext-10000
Loading data to table db_window.old_partition partition (daystr=null)
Time taken to load dynamic partitions: 1.588 seconds
Time taken for adding to write entity : 0.014 seconds
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 1  Cumulative CPU: 23.07 sec  HDFS Read: 18383  HDFS Write: 3075  SUCCESS
Total MapReduce CPU Time Spent: 23 seconds 70 msec
OK
shop_id stat_date       ordamt  stat_date
Time taken: 292.625 seconds

Note: this dynamic insert took 292.625 seconds for only about twenty rows. Imagine how long the same job would take over tens of millions of rows.

Check the partitions:

> show partitions old_partition;
OK
partition
daystr=201501120030
daystr=201501120100
daystr=201501120130
daystr=201501120200
daystr=201501120230
daystr=201501120300
daystr=201501120330
daystr=201501120400
daystr=201501120430
daystr=201501120530
Time taken: 0.112 seconds, Fetched: 10 row(s)

Set hive.mapred.mode=nonstrict before querying old_partition. If this parameter is left at strict, a SELECT on a partitioned table must specify a partition filter, otherwise Hive rejects the query.

> set hive.mapred.mode=nonstrict;
> select * from old_partition;
OK
old_partition.shop_id   old_partition.stat_date   old_partition.ordamt   old_partition.daystr
10026   201501120030    5170.0  201501120030
10027   201501120030    2170.0  201501120030
10026   201501120100    5669.0  201501120100
10027   201501120100    1623.0  201501120100
10026   201501120130    2396.0  201501120130
10027   201501120130    3397.0  201501120130
10026   201501120200    1498.0  201501120200
10027   201501120200    1434.0  201501120200
10026   201501120230    1997.0  201501120230
10027   201501120230    1001.0  201501120230
10026   201501120300    1188.0  201501120300
10028   201501120300    1687.0  201501120300
10026   201501120330    598.0   201501120330
10028   201501120330    1298.0  201501120330
10026   201501120400    479.0   201501120400
10028   201501120400    149.0   201501120400
10026   201501120430    1587.0  201501120430
10029   201501120430    2587.0  201501120430
10026   201501120530    799.0   201501120530
10029   201501120530    589.0   201501120530
Time taken: 0.425 seconds, Fetched: 20 row(s)

Now copy the old partitioned table's structure and data using the second, faster method:

> create table new_partition like old_partition;
OK
Time taken: 0.224 seconds
> show create table new_partition;
OK
createtab_stmt
CREATE TABLE `new_partition`(
  `shop_id` int,
  `stat_date` string,
  `ordamt` double)
PARTITIONED BY (
  `daystr` string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
STORED AS INPUTFORMAT
  'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
  'hdfs://bigdata-training01.fuyun.com:8020/user/hive/warehouse-3.1.1/db_window.db/new_partition'
TBLPROPERTIES (
  'transient_lastDdlTime'='1532834516')
Time taken: 0.17 seconds, Fetched: 16 row(s)

Use hdfs dfs -cp to copy all the partition directories under old_partition's HDFS directory into new_partition's directory:

cd /opt/cdh-5.7.6/hadoop-2.6.0-cdh5.7.6
bin/hdfs dfs -cp /user/hive/warehouse-3.1.1/db_window.db/old_partition/* /user/hive/warehouse-3.1.1/db_window.db/new_partition
bin/hdfs dfs -ls /user/hive/warehouse-3.1.1/db_window.db/new_partition
Found 10 items
drwxr-xr-x   - rdedu supergroup          0 2018-07-29 11:25 /user/hive/warehouse-3.1.1/db_window.db/new_partition/daystr=201501120030
drwxr-xr-x   - rdedu supergroup          0 2018-07-29 11:25 /user/hive/warehouse-3.1.1/db_window.db/new_partition/daystr=201501120100
drwxr-xr-x   - rdedu supergroup          0 2018-07-29 11:25 /user/hive/warehouse-3.1.1/db_window.db/new_partition/daystr=201501120130
drwxr-xr-x   - rdedu supergroup          0 2018-07-29 11:25 /user/hive/warehouse-3.1.1/db_window.db/new_partition/daystr=201501120200
drwxr-xr-x   - rdedu supergroup          0 2018-07-29 11:25 /user/hive/warehouse-3.1.1/db_window.db/new_partition/daystr=201501120230
drwxr-xr-x   - rdedu supergroup          0 2018-07-29 11:25 /user/hive/warehouse-3.1.1/db_window.db/new_partition/daystr=201501120300
drwxr-xr-x   - rdedu supergroup          0 2018-07-29 11:25 /user/hive/warehouse-3.1.1/db_window.db/new_partition/daystr=201501120330
drwxr-xr-x   - rdedu supergroup          0 2018-07-29 11:25 /user/hive/warehouse-3.1.1/db_window.db/new_partition/daystr=201501120400
drwxr-xr-x   - rdedu supergroup          0 2018-07-29 11:25 /user/hive/warehouse-3.1.1/db_window.db/new_partition/daystr=201501120430
drwxr-xr-x   - rdedu supergroup          0 2018-07-29 11:25 /user/hive/warehouse-3.1.1/db_window.db/new_partition/daystr=201501120530
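Note that hdfs dfs -cp streams all data through a single client process. For very large tables, Hadoop ships DistCp, which performs the copy as a distributed MapReduce job. A hedged sketch, reusing the namenode address and paths shown above (verify both against your cluster before running):

```shell
# DistCp copies the partition directories in parallel across the cluster.
# -update copies the *contents* of the source directory into the target,
# matching the behavior of "hdfs dfs -cp src/* dst" used above.
hadoop distcp -update \
  hdfs://bigdata-training01.erongda.com:8020/user/hive/warehouse-3.1.1/db_window.db/old_partition \
  hdfs://bigdata-training01.erongda.com:8020/user/hive/warehouse-3.1.1/db_window.db/new_partition
```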

Back in Hive, check new_partition's partitions:

> show partitions new_partition;
OK
partition
Time taken: 0.124 seconds

No partitions are listed, because the metastore knows nothing about the directories we just copied. Run msck repair table new_partition; to repair the partition metadata, then check again:

> msck repair table new_partition;
OK
Partitions not in metastore:    new_partition:daystr=201501120030  new_partition:daystr=201501120100  new_partition:daystr=201501120130  new_partition:daystr=201501120200  new_partition:daystr=201501120230  new_partition:daystr=201501120300  new_partition:daystr=201501120330  new_partition:daystr=201501120400  new_partition:daystr=201501120430  new_partition:daystr=201501120530
Repair: Added partition to metastore new_partition:daystr=201501120530
Repair: Added partition to metastore new_partition:daystr=201501120330
Repair: Added partition to metastore new_partition:daystr=201501120230
Repair: Added partition to metastore new_partition:daystr=201501120030
Repair: Added partition to metastore new_partition:daystr=201501120130
Repair: Added partition to metastore new_partition:daystr=201501120100
Repair: Added partition to metastore new_partition:daystr=201501120200
Repair: Added partition to metastore new_partition:daystr=201501120300
Repair: Added partition to metastore new_partition:daystr=201501120400
Repair: Added partition to metastore new_partition:daystr=201501120430
Time taken: 0.742 seconds, Fetched: 11 row(s)
> show partitions new_partition;
OK
partition
daystr=201501120030
daystr=201501120100
daystr=201501120130
daystr=201501120200
daystr=201501120230
daystr=201501120300
daystr=201501120330
daystr=201501120400
daystr=201501120430
daystr=201501120530
Time taken: 0.082 seconds, Fetched: 10 row(s)
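msck repair works by scanning the table's HDFS location for daystr=... subdirectories that the metastore does not yet know about and registering them. The same registration can also be done explicitly per partition with ALTER TABLE ... ADD PARTITION, standard HiveQL that can be cheaper than a full scan when only a few partitions were copied. A hedged sketch for one of the partitions above:

```shell
# Register a single copied partition explicitly instead of scanning with msck.
# IF NOT EXISTS makes the statement safe to re-run.
hive -e "alter table new_partition add if not exists partition (daystr='201501120030');"
```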

The new partitioned table is now complete, with the same structure and data as the old one. Comparing the two tests, the first method (dynamic-partition insert) is clearly much slower than the second (HDFS copy plus msck repair), even with only a few dozen rows of test data; with tens of millions of rows the gap would be far larger.


