A friend recently asked me to help optimize some million-row MySQL operations, and I agreed. Now that the work is done, I'm writing up my notes as a reference for others. If you have a better approach or any suggestions, contact [email protected]
Programmers don't like long explanations, so straight to the code:
public boolean test(String filePath) throws Exception {
    // Single-column table, so only LINES TERMINATED BY matters; a multi-column
    // file would also need FIELDS TERMINATED BY (see below)
    String sql = "LOAD DATA INFILE '" + filePath + "' REPLACE INTO TABLE t_table"
            + " LINES TERMINATED BY '\\r\\n' (terminal_id)";
    // try-with-resources ensures the connection and statement are closed
    try (Connection con = jdbcTemplate.getDataSource().getConnection();
         PreparedStatement pstmt = con.prepareStatement(sql)) {
        pstmt.execute();
        return true;
    }
}
The table here has only one column. If your file carries several columns per line, split them with FIELDS TERMINATED BY set to whatever delimiter the file uses; look up the LOAD DATA INFILE syntax if this is new to you, and see the sketch below. (This post is not written for copy-pasters, sorry.)
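As a rough illustration of the multi-column case, here is a minimal sketch that assumes the same jdbcTemplate field as above; the comma delimiter and the terminal_name column are hypothetical and should match your actual file and table:

// Hypothetical multi-column variant: each line of the file holds "id,name"
public boolean loadMultiColumn(String filePath) throws Exception {
    String sql = "LOAD DATA INFILE '" + filePath + "' REPLACE INTO TABLE t_table"
            + " FIELDS TERMINATED BY ','"        // delimiter between columns in the file
            + " LINES TERMINATED BY '\\r\\n'"
            + " (terminal_id, terminal_name)";   // terminal_name is a made-up second column
    try (Connection con = jdbcTemplate.getDataSource().getConnection();
         PreparedStatement pstmt = con.prepareStatement(sql)) {
        return pstmt.execute();
    }
}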
The file itself is uploaded through Spring MVC; the controller code that handles it looks like this:
@RequestMapping(value = "/batchAdd", method = RequestMethod.POST) @ResponseBody public void batchAdd(@RequestParam(value="addBatchFile",required = false) MultipartFile uploadfile){ String msg = "批量导入出错"; try { long start = System.currentTimeMillis(); System.out.println(start); String name="file"+System.currentTimeMillis()+".txt"; File file = new File(name); uploadfile.transferTo(file); //此处要注意windows和linux的File.separator不一样…… 这里还要测一下的 service.test(file.getAbsolutePath().replaceAll("\\\\", "//")); long end = System.currentTimeMillis(); System.out.println(end); System.out.println("共花费" +(end-start)); } catch (Exception e) { // TODO Auto-generated catch block e.printStackTrace(); } ........ }
Loading up to 5 million rows into MySQL this way completes in roughly 3 minutes.
================== divider ==================
Million-row deletes
@RequestMapping(value = "/batchDelete", method = RequestMethod.POST) @ResponseBody public void batchDelete(@RequestParam(value="deleteBatchFile",required = false) MultipartFile uploadfile){ List<String> list = new ArrayList<String>(); try { CommonsMultipartFile cf= (CommonsMultipartFile)uploadfile; DiskFileItem fi = (DiskFileItem)cf.getFileItem(); File f = fi.getStoreLocation(); List<String> data =FileUtils.readLines(f); service.updateBatchDel("delete from t_table where terminal_id= ?",data); long end = System.currentTimeMillis(); System.out.println(end); System.out.println(end-start); code = 0; }catch (Exception e){ } return "xxxxxxxx"; }
The service method it calls is as follows:
public boolean updateBatchDel(String sql, List<String> data) {
    boolean flag = false;
    Connection con = null;
    PreparedStatement pstmt = null;
    try {
        con = jdbcTemplate.getDataSource().getConnection();
        con.setAutoCommit(false);            // run the whole batch in one transaction
        pstmt = con.prepareStatement(sql);
        for (int i = 0; i < data.size(); i++) {
            pstmt.setString(1, data.get(i).trim());
            pstmt.addBatch();
        }
        pstmt.executeBatch();                // execute the batch
        con.commit();                        // commit the transaction
        flag = true;
    } catch (SQLException e) {
        try {
            if (con != null) con.rollback(); // roll back the transaction on failure
        } catch (SQLException ex) {
            ex.printStackTrace();
        }
    } finally {
        try {
            if (pstmt != null) pstmt.close();
            if (con != null) con.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
    return flag;
}
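Two tweaks usually help at this scale; neither is measured here, so treat them as assumptions. First, MySQL Connector/J only rewrites a JDBC batch into efficient multi-statement packets when rewriteBatchedStatements=true is set on the JDBC URL. Second, flushing the batch in fixed-size chunks keeps the driver's queue bounded when the input runs to millions of lines. A sketch of the loop above, rewritten to flush in chunks (the 10,000 figure is made up and should be tuned):

// Hypothetical URL tweak: jdbc:mysql://db-host:3306/mydb?rewriteBatchedStatements=true
final int batchSize = 10_000;                // made-up chunk size; tune for your setup
for (int i = 0; i < data.size(); i++) {
    pstmt.setString(1, data.get(i).trim());
    pstmt.addBatch();
    if ((i + 1) % batchSize == 0) {
        pstmt.executeBatch();                // send this chunk to the server
    }
}
pstmt.executeBatch();                        // flush the final partial chunk
con.commit();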
Deleting a million-plus rows this way also finishes within about 2 minutes.
Posted: 2024-09-29 16:36:12